00:00:00.000 Started by upstream project "autotest-per-patch" build number 132111 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.087 Fetching changes from the remote Git repository 00:00:00.089 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.297 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.297 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.570 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.582 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.594 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.594 > git config core.sparsecheckout # timeout=10 00:00:04.607 > git read-tree -mu HEAD # timeout=10 00:00:04.624 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.647 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.647 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.736 [Pipeline] Start of Pipeline 00:00:04.749 [Pipeline] library 00:00:04.750 Loading library shm_lib@master 00:00:04.750 Library shm_lib@master is cached. Copying from home. 00:00:04.767 [Pipeline] node 00:00:04.785 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.786 [Pipeline] { 00:00:04.797 [Pipeline] catchError 00:00:04.798 [Pipeline] { 00:00:04.811 [Pipeline] wrap 00:00:04.821 [Pipeline] { 00:00:04.830 [Pipeline] stage 00:00:04.832 [Pipeline] { (Prologue) 00:00:05.054 [Pipeline] sh 00:00:05.345 + logger -p user.info -t JENKINS-CI 00:00:05.364 [Pipeline] echo 00:00:05.366 Node: CYP13 00:00:05.375 [Pipeline] sh 00:00:05.685 [Pipeline] setCustomBuildProperty 00:00:05.696 [Pipeline] echo 00:00:05.698 Cleanup processes 00:00:05.704 [Pipeline] sh 00:00:05.997 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.997 1393666 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.013 [Pipeline] sh 00:00:06.307 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.307 ++ grep -v 'sudo pgrep' 00:00:06.307 ++ awk '{print $1}' 00:00:06.307 + sudo kill -9 00:00:06.307 + true 00:00:06.320 [Pipeline] cleanWs 00:00:06.329 [WS-CLEANUP] Deleting project workspace... 00:00:06.329 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.336 [WS-CLEANUP] done 00:00:06.339 [Pipeline] setCustomBuildProperty 00:00:06.351 [Pipeline] sh 00:00:06.639 + sudo git config --global --replace-all safe.directory '*' 00:00:06.776 [Pipeline] httpRequest 00:00:07.209 [Pipeline] echo 00:00:07.210 Sorcerer 10.211.164.101 is alive 00:00:07.220 [Pipeline] retry 00:00:07.222 [Pipeline] { 00:00:07.239 [Pipeline] httpRequest 00:00:07.244 HttpMethod: GET 00:00:07.244 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.245 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.262 Response Code: HTTP/1.1 200 OK 00:00:07.262 Success: Status code 200 is in the accepted range: 200,404 00:00:07.262 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.192 [Pipeline] } 00:00:10.208 [Pipeline] // retry 00:00:10.214 [Pipeline] sh 00:00:10.505 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.523 [Pipeline] httpRequest 00:00:10.901 [Pipeline] echo 00:00:10.903 Sorcerer 10.211.164.101 is alive 00:00:10.913 [Pipeline] retry 00:00:10.916 [Pipeline] { 00:00:10.932 [Pipeline] httpRequest 00:00:10.937 HttpMethod: GET 00:00:10.938 URL: http://10.211.164.101/packages/spdk_adaafacab30ec3dd3ba0d7b3bca835ee588a83a8.tar.gz 00:00:10.939 Sending request to url: http://10.211.164.101/packages/spdk_adaafacab30ec3dd3ba0d7b3bca835ee588a83a8.tar.gz 00:00:10.952 Response Code: HTTP/1.1 200 OK 00:00:10.953 Success: Status code 200 is in the accepted range: 200,404 00:00:10.953 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_adaafacab30ec3dd3ba0d7b3bca835ee588a83a8.tar.gz 00:00:43.449 [Pipeline] } 00:00:43.464 [Pipeline] // retry 00:00:43.472 [Pipeline] sh 00:00:43.764 + tar --no-same-owner -xf spdk_adaafacab30ec3dd3ba0d7b3bca835ee588a83a8.tar.gz 00:00:47.084 [Pipeline] sh 00:00:47.375 + git -C spdk log --oneline -n5 00:00:47.375 adaafacab bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:00:47.376 31341da86 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:00:47.376 cfcfe6c3e bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:00:47.376 4aa7d50c3 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:00:47.376 b1e5d8902 dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:00:47.391 [Pipeline] } 00:00:47.437 [Pipeline] // stage 00:00:47.450 [Pipeline] stage 00:00:47.453 [Pipeline] { (Prepare) 00:00:47.468 [Pipeline] writeFile 00:00:47.477 [Pipeline] sh 00:00:47.760 + logger -p user.info -t JENKINS-CI 00:00:47.773 [Pipeline] sh 00:00:48.062 + logger -p user.info -t JENKINS-CI 00:00:48.076 [Pipeline] sh 00:00:48.363 + cat autorun-spdk.conf 00:00:48.363 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.363 SPDK_TEST_NVMF=1 00:00:48.363 SPDK_TEST_NVME_CLI=1 00:00:48.363 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.363 SPDK_TEST_NVMF_NICS=e810 00:00:48.363 SPDK_TEST_VFIOUSER=1 00:00:48.363 SPDK_RUN_UBSAN=1 00:00:48.363 NET_TYPE=phy 00:00:48.370 RUN_NIGHTLY=0 00:00:48.374 [Pipeline] readFile 00:00:48.392 [Pipeline] withEnv 00:00:48.394 [Pipeline] { 00:00:48.404 [Pipeline] sh 00:00:48.692 + set -ex 00:00:48.692 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:48.692 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.692 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.692 ++ SPDK_TEST_NVMF=1 00:00:48.692 
++ SPDK_TEST_NVME_CLI=1 00:00:48.692 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.692 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.692 ++ SPDK_TEST_VFIOUSER=1 00:00:48.692 ++ SPDK_RUN_UBSAN=1 00:00:48.692 ++ NET_TYPE=phy 00:00:48.692 ++ RUN_NIGHTLY=0 00:00:48.692 + case $SPDK_TEST_NVMF_NICS in 00:00:48.692 + DRIVERS=ice 00:00:48.692 + [[ tcp == \r\d\m\a ]] 00:00:48.692 + [[ -n ice ]] 00:00:48.692 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:48.692 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:58.695 rmmod: ERROR: Module irdma is not currently loaded 00:00:58.695 rmmod: ERROR: Module i40iw is not currently loaded 00:00:58.695 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:58.695 + true 00:00:58.695 + for D in $DRIVERS 00:00:58.695 + sudo modprobe ice 00:00:58.695 + exit 0 00:00:58.706 [Pipeline] } 00:00:58.720 [Pipeline] // withEnv 00:00:58.726 [Pipeline] } 00:00:58.739 [Pipeline] // stage 00:00:58.749 [Pipeline] catchError 00:00:58.751 [Pipeline] { 00:00:58.765 [Pipeline] timeout 00:00:58.765 Timeout set to expire in 1 hr 0 min 00:00:58.767 [Pipeline] { 00:00:58.781 [Pipeline] stage 00:00:58.783 [Pipeline] { (Tests) 00:00:58.797 [Pipeline] sh 00:00:59.089 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.089 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.089 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.089 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:59.089 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.089 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.090 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:59.090 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.090 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.090 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.090 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:59.090 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.090 + source /etc/os-release 00:00:59.090 ++ NAME='Fedora Linux' 00:00:59.090 ++ VERSION='39 (Cloud Edition)' 00:00:59.090 ++ ID=fedora 00:00:59.090 ++ VERSION_ID=39 00:00:59.090 ++ VERSION_CODENAME= 00:00:59.090 ++ PLATFORM_ID=platform:f39 00:00:59.090 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:59.090 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.090 ++ LOGO=fedora-logo-icon 00:00:59.090 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:59.090 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.090 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:59.090 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.090 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.090 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.090 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:59.090 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.090 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:59.090 ++ SUPPORT_END=2024-11-12 00:00:59.090 ++ VARIANT='Cloud Edition' 00:00:59.090 ++ VARIANT_ID=cloud 00:00:59.090 + uname -a 00:00:59.090 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:59.090 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:02.388 Hugepages 00:01:02.388 node hugesize free / total 00:01:02.388 node0 1048576kB 0 / 0 00:01:02.388 node0 2048kB 0 / 0 00:01:02.388 node1 1048576kB 0 / 0 00:01:02.388 node1 2048kB 0 
/ 0 00:01:02.388 00:01:02.388 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:02.388 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:02.388 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:02.388 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:02.388 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:02.388 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:02.388 + rm -f /tmp/spdk-ld-path 00:01:02.388 + source autorun-spdk.conf 00:01:02.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.388 ++ SPDK_TEST_NVMF=1 00:01:02.388 ++ SPDK_TEST_NVME_CLI=1 00:01:02.388 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.388 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.388 ++ SPDK_TEST_VFIOUSER=1 00:01:02.388 ++ SPDK_RUN_UBSAN=1 00:01:02.388 ++ NET_TYPE=phy 00:01:02.388 ++ RUN_NIGHTLY=0 00:01:02.388 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:02.388 + [[ -n '' ]] 00:01:02.388 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.388 + for M in /var/spdk/build-*-manifest.txt 00:01:02.388 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:02.388 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.388 + for M in /var/spdk/build-*-manifest.txt 00:01:02.388 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:02.389 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.389 + for M in /var/spdk/build-*-manifest.txt 00:01:02.389 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:02.389 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.389 ++ uname 00:01:02.389 + [[ Linux == \L\i\n\u\x ]] 00:01:02.389 + sudo dmesg -T 00:01:02.389 + sudo dmesg --clear 00:01:02.389 + dmesg_pid=1395235 00:01:02.389 + [[ Fedora Linux == FreeBSD ]] 00:01:02.389 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.389 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.389 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:02.389 + [[ -x /usr/src/fio-static/fio ]] 00:01:02.389 + export FIO_BIN=/usr/src/fio-static/fio 00:01:02.389 + FIO_BIN=/usr/src/fio-static/fio 00:01:02.389 + sudo dmesg -Tw 00:01:02.389 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:02.389 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:02.389 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:02.389 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.389 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.389 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:02.389 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.389 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.389 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.651 12:57:44 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:02.651 12:57:44 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:02.651 12:57:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:02.651 12:57:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:02.651 12:57:44 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.651 12:57:44 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:02.651 12:57:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:02.651 12:57:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:02.651 12:57:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:02.651 12:57:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:02.651 12:57:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:02.651 12:57:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.651 12:57:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.651 12:57:44 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.651 12:57:44 -- paths/export.sh@5 -- $ export PATH 00:01:02.651 12:57:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.651 12:57:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:02.651 12:57:44 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:02.651 12:57:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730894264.XXXXXX 00:01:02.651 12:57:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730894264.Wo8Mbc 00:01:02.651 12:57:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:02.651 12:57:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:02.651 12:57:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:02.651 12:57:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:02.651 12:57:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:02.651 12:57:44 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:02.651 12:57:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:02.651 12:57:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.651 12:57:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:02.651 12:57:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:02.651 12:57:44 -- pm/common@17 -- $ local monitor 00:01:02.651 12:57:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.651 12:57:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.651 12:57:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.651 12:57:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.651 12:57:44 -- pm/common@21 -- $ date +%s 00:01:02.651 12:57:44 -- pm/common@25 -- $ sleep 1 00:01:02.651 12:57:44 -- pm/common@21 -- $ date +%s 00:01:02.651 12:57:44 -- pm/common@21 -- $ date +%s 00:01:02.651 12:57:44 -- pm/common@21 -- $ date +%s 00:01:02.651 12:57:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730894264 00:01:02.651 12:57:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730894264 00:01:02.651 12:57:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730894264 00:01:02.651 12:57:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730894264 00:01:02.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730894264_collect-cpu-load.pm.log 00:01:02.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730894264_collect-vmstat.pm.log 00:01:02.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730894264_collect-cpu-temp.pm.log 00:01:02.651 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730894264_collect-bmc-pm.bmc.pm.log 00:01:03.594 12:57:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:03.594 12:57:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:03.594 12:57:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:03.594 12:57:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.594 12:57:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:03.594 Wed Nov 6 11:57:45 AM UTC 2024 00:01:03.594 12:57:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:03.594 v25.01-pre-177-gadaafacab 00:01:03.594 12:57:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:03.594 12:57:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:03.594 12:57:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:03.594 12:57:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:03.594 12:57:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:03.594 12:57:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.855 ************************************ 00:01:03.855 START TEST ubsan 00:01:03.855 ************************************ 00:01:03.855 12:57:45 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:03.855 using ubsan 00:01:03.855 00:01:03.855 real 0m0.001s 00:01:03.855 user 0m0.000s 00:01:03.855 sys 0m0.000s 00:01:03.855 12:57:45 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:03.855 12:57:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:03.855 ************************************ 00:01:03.855 END TEST ubsan 00:01:03.855 ************************************ 00:01:03.855 12:57:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:03.855 12:57:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:03.855 12:57:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:03.855 12:57:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:03.855 12:57:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:03.855 12:57:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:03.855 12:57:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:03.855 12:57:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:03.855 
12:57:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:03.855 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:03.855 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:04.426 Using 'verbs' RDMA provider 00:01:20.283 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:32.515 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:33.087 Creating mk/config.mk...done. 00:01:33.087 Creating mk/cc.flags.mk...done. 00:01:33.087 Type 'make' to build. 00:01:33.087 12:58:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:33.087 12:58:14 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:33.087 12:58:14 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:33.087 12:58:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:33.087 ************************************ 00:01:33.087 START TEST make 00:01:33.087 ************************************ 00:01:33.087 12:58:14 make -- common/autotest_common.sh@1127 -- $ make -j144 00:01:33.348 make[1]: Nothing to be done for 'all'. 00:01:35.263 The Meson build system 00:01:35.263 Version: 1.5.0 00:01:35.263 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:35.263 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:35.263 Build type: native build 00:01:35.263 Project name: libvfio-user 00:01:35.263 Project version: 0.0.1 00:01:35.263 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:35.263 C linker for the host machine: cc ld.bfd 2.40-14 00:01:35.263 Host machine cpu family: x86_64 00:01:35.263 Host machine cpu: x86_64 00:01:35.263 Run-time dependency threads found: YES 00:01:35.263 Library dl found: YES 00:01:35.263 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:35.263 Run-time dependency json-c found: YES 0.17 00:01:35.263 Run-time dependency cmocka found: YES 1.1.7 00:01:35.263 Program pytest-3 found: NO 00:01:35.263 Program flake8 found: NO 00:01:35.263 Program misspell-fixer found: NO 00:01:35.263 Program restructuredtext-lint found: NO 00:01:35.263 Program valgrind found: YES (/usr/bin/valgrind) 00:01:35.263 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.263 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.263 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.263 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:35.263 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:35.263 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:35.263 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:35.263 Build targets in project: 8 00:01:35.263 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:35.263 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:35.263 00:01:35.263 libvfio-user 0.0.1 00:01:35.263 00:01:35.263 User defined options 00:01:35.263 buildtype : debug 00:01:35.263 default_library: shared 00:01:35.263 libdir : /usr/local/lib 00:01:35.263 00:01:35.263 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.263 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:35.523 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:35.523 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:35.523 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:35.523 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:35.523 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:35.523 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:35.523 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:35.523 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:35.523 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:35.523 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:35.523 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:35.523 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:35.523 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:35.523 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:35.523 [15/37] Compiling C object samples/null.p/null.c.o 00:01:35.523 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:35.523 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:35.523 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:35.523 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:35.523 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:35.523 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:35.523 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:35.523 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:35.523 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:35.523 [25/37] Compiling C object samples/server.p/server.c.o 00:01:35.523 [26/37] Compiling C object samples/client.p/client.c.o 00:01:35.523 [27/37] Linking target samples/client 00:01:35.523 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:35.523 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:35.523 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:35.784 [31/37] Linking target test/unit_tests 00:01:35.784 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:35.784 [33/37] Linking target samples/null 00:01:35.784 [34/37] Linking target samples/lspci 00:01:35.784 [35/37] Linking target samples/server 00:01:35.784 [36/37] Linking target samples/gpio-pci-idio-16 00:01:35.784 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:35.784 INFO: autodetecting backend as ninja 00:01:35.784 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:36.045 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.307 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:36.307 ninja: no work to do. 00:01:42.907 The Meson build system 00:01:42.907 Version: 1.5.0 00:01:42.907 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:42.907 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:42.907 Build type: native build 00:01:42.907 Program cat found: YES (/usr/bin/cat) 00:01:42.907 Project name: DPDK 00:01:42.907 Project version: 24.03.0 00:01:42.907 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:42.907 C linker for the host machine: cc ld.bfd 2.40-14 00:01:42.907 Host machine cpu family: x86_64 00:01:42.907 Host machine cpu: x86_64 00:01:42.907 Message: ## Building in Developer Mode ## 00:01:42.907 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:42.907 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:42.907 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:42.907 Program python3 found: YES (/usr/bin/python3) 00:01:42.907 Program cat found: YES (/usr/bin/cat) 00:01:42.907 Compiler for C supports arguments -march=native: YES 00:01:42.907 Checking for size of "void *" : 8 00:01:42.907 Checking for size of "void *" : 8 (cached) 00:01:42.907 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:42.907 Library m found: YES 00:01:42.907 Library numa found: YES 00:01:42.907 Has header "numaif.h" : YES 00:01:42.907 Library fdt found: NO 00:01:42.907 Library execinfo found: NO 00:01:42.907 Has header "execinfo.h" : YES 00:01:42.907 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:42.907 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:42.907 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:42.907 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:42.907 Run-time dependency openssl found: YES 3.1.1 00:01:42.907 Run-time dependency libpcap found: YES 1.10.4 00:01:42.907 Has header "pcap.h" with dependency libpcap: YES 00:01:42.907 Compiler for C supports arguments -Wcast-qual: YES 00:01:42.907 Compiler for C supports arguments -Wdeprecated: YES 00:01:42.907 Compiler for C supports arguments -Wformat: YES 00:01:42.907 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:42.907 Compiler for C supports arguments -Wformat-security: NO 00:01:42.907 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:42.907 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:42.907 Compiler for C supports arguments -Wnested-externs: YES 00:01:42.907 Compiler for C supports arguments -Wold-style-definition: YES 00:01:42.907 Compiler for C supports arguments -Wpointer-arith: YES 00:01:42.907 Compiler for C supports arguments -Wsign-compare: YES 00:01:42.907 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:42.907 Compiler for C supports arguments -Wundef: YES 00:01:42.907 Compiler for C supports arguments -Wwrite-strings: YES 00:01:42.907 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:42.907 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:42.907 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:42.907 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:42.907 Program objdump found: YES (/usr/bin/objdump) 00:01:42.907 Compiler for C supports arguments -mavx512f: YES 00:01:42.907 Checking if "AVX512 checking" compiles: YES 00:01:42.907 Fetching value of define "__SSE4_2__" : 1 00:01:42.907 Fetching value of define "__AES__" : 1 00:01:42.907 Fetching value of define "__AVX__" : 1 00:01:42.907 Fetching value of define "__AVX2__" : 1 00:01:42.907 Fetching value of define "__AVX512BW__" : 1 00:01:42.907 Fetching value of define "__AVX512CD__" : 1 00:01:42.907 Fetching value of define "__AVX512DQ__" : 1 00:01:42.907 Fetching value of define "__AVX512F__" : 1 00:01:42.907 Fetching value of define "__AVX512VL__" : 1 00:01:42.907 Fetching value of define "__PCLMUL__" : 1 00:01:42.907 Fetching value of define "__RDRND__" : 1 00:01:42.907 Fetching value of define "__RDSEED__" : 1 00:01:42.907 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:42.907 Fetching value of define "__znver1__" : (undefined) 00:01:42.907 Fetching value of define "__znver2__" : (undefined) 00:01:42.907 Fetching value of define "__znver3__" : (undefined) 00:01:42.907 Fetching value of define "__znver4__" : (undefined) 00:01:42.907 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:42.907 Message: lib/log: Defining dependency "log" 00:01:42.907 Message: lib/kvargs: Defining dependency "kvargs" 00:01:42.907 Message: lib/telemetry: Defining dependency "telemetry" 00:01:42.907 Checking for function "getentropy" : NO 00:01:42.907 Message: lib/eal: Defining dependency "eal" 00:01:42.907 Message: lib/ring: Defining dependency "ring" 00:01:42.907 Message: lib/rcu: Defining dependency "rcu" 00:01:42.907 Message: lib/mempool: Defining dependency "mempool" 00:01:42.907 Message: lib/mbuf: Defining dependency "mbuf" 00:01:42.907 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:42.907 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:42.907 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:42.907 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:42.907 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:42.907 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:42.907 Compiler for C supports arguments -mpclmul: YES 00:01:42.907 Compiler for C supports arguments -maes: YES 00:01:42.907 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:42.907 Compiler for C supports arguments -mavx512bw: YES 00:01:42.907 Compiler for C supports arguments -mavx512dq: YES 00:01:42.907 Compiler for C supports arguments -mavx512vl: YES 00:01:42.907 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:42.907 Compiler for C supports arguments -mavx2: YES 00:01:42.907 Compiler for C supports arguments -mavx: YES 00:01:42.907 Message: lib/net: Defining dependency "net" 00:01:42.907 Message: lib/meter: Defining dependency "meter" 00:01:42.907 Message: lib/ethdev: Defining dependency "ethdev" 00:01:42.907 Message: lib/pci: Defining dependency "pci" 00:01:42.907 Message: lib/cmdline: Defining dependency "cmdline" 00:01:42.907 Message: lib/hash: Defining dependency "hash" 00:01:42.907 Message: lib/timer: Defining dependency "timer" 00:01:42.907 Message: lib/compressdev: Defining dependency "compressdev" 00:01:42.907 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:42.907 Message: lib/dmadev: Defining dependency "dmadev" 
00:01:42.907 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:42.907 Message: lib/power: Defining dependency "power" 00:01:42.907 Message: lib/reorder: Defining dependency "reorder" 00:01:42.907 Message: lib/security: Defining dependency "security" 00:01:42.907 Has header "linux/userfaultfd.h" : YES 00:01:42.907 Has header "linux/vduse.h" : YES 00:01:42.907 Message: lib/vhost: Defining dependency "vhost" 00:01:42.907 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:42.907 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:42.907 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:42.907 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:42.907 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:42.907 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:42.907 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:42.907 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:42.907 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:42.907 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:42.907 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:42.907 Configuring doxy-api-html.conf using configuration 00:01:42.907 Configuring doxy-api-man.conf using configuration 00:01:42.907 Program mandb found: YES (/usr/bin/mandb) 00:01:42.907 Program sphinx-build found: NO 00:01:42.907 Configuring rte_build_config.h using configuration 00:01:42.907 Message: 00:01:42.907 ================= 00:01:42.907 Applications Enabled 00:01:42.907 ================= 00:01:42.907 00:01:42.907 apps: 00:01:42.907 00:01:42.907 00:01:42.907 Message: 00:01:42.907 ================= 00:01:42.907 Libraries Enabled 00:01:42.907 ================= 00:01:42.907 00:01:42.907 libs: 00:01:42.907 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:42.907 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:42.907 cryptodev, dmadev, power, reorder, security, vhost, 00:01:42.907 00:01:42.907 Message: 00:01:42.907 =============== 00:01:42.907 Drivers Enabled 00:01:42.907 =============== 00:01:42.907 00:01:42.907 common: 00:01:42.907 00:01:42.907 bus: 00:01:42.907 pci, vdev, 00:01:42.907 mempool: 00:01:42.907 ring, 00:01:42.907 dma: 00:01:42.907 00:01:42.907 net: 00:01:42.907 00:01:42.907 crypto: 00:01:42.907 00:01:42.907 compress: 00:01:42.907 00:01:42.907 vdpa: 00:01:42.907 00:01:42.907 00:01:42.907 Message: 00:01:42.907 ================= 00:01:42.907 Content Skipped 00:01:42.907 ================= 00:01:42.907 00:01:42.907 apps: 00:01:42.907 dumpcap: explicitly disabled via build config 00:01:42.907 graph: explicitly disabled via build config 00:01:42.907 pdump: explicitly disabled via build config 00:01:42.908 proc-info: explicitly disabled via build config 00:01:42.908 test-acl: explicitly disabled via build config 00:01:42.908 test-bbdev: explicitly disabled via build config 00:01:42.908 test-cmdline: explicitly disabled via build config 00:01:42.908 test-compress-perf: explicitly disabled via build config 00:01:42.908 test-crypto-perf: explicitly disabled via build config 00:01:42.908 test-dma-perf: explicitly disabled via build config 00:01:42.908 test-eventdev: explicitly disabled via build config 00:01:42.908 test-fib: explicitly disabled via build config 00:01:42.908 test-flow-perf: explicitly disabled via build config 00:01:42.908 test-gpudev: explicitly disabled 
via build config 00:01:42.908 test-mldev: explicitly disabled via build config 00:01:42.908 test-pipeline: explicitly disabled via build config 00:01:42.908 test-pmd: explicitly disabled via build config 00:01:42.908 test-regex: explicitly disabled via build config 00:01:42.908 test-sad: explicitly disabled via build config 00:01:42.908 test-security-perf: explicitly disabled via build config 00:01:42.908 00:01:42.908 libs: 00:01:42.908 argparse: explicitly disabled via build config 00:01:42.908 metrics: explicitly disabled via build config 00:01:42.908 acl: explicitly disabled via build config 00:01:42.908 bbdev: explicitly disabled via build config 00:01:42.908 bitratestats: explicitly disabled via build config 00:01:42.908 bpf: explicitly disabled via build config 00:01:42.908 cfgfile: explicitly disabled via build config 00:01:42.908 distributor: explicitly disabled via build config 00:01:42.908 efd: explicitly disabled via build config 00:01:42.908 eventdev: explicitly disabled via build config 00:01:42.908 dispatcher: explicitly disabled via build config 00:01:42.908 gpudev: explicitly disabled via build config 00:01:42.908 gro: explicitly disabled via build config 00:01:42.908 gso: explicitly disabled via build config 00:01:42.908 ip_frag: explicitly disabled via build config 00:01:42.908 jobstats: explicitly disabled via build config 00:01:42.908 latencystats: explicitly disabled via build config 00:01:42.908 lpm: explicitly disabled via build config 00:01:42.908 member: explicitly disabled via build config 00:01:42.908 pcapng: explicitly disabled via build config 00:01:42.908 rawdev: explicitly disabled via build config 00:01:42.908 regexdev: explicitly disabled via build config 00:01:42.908 mldev: explicitly disabled via build config 00:01:42.908 rib: explicitly disabled via build config 00:01:42.908 sched: explicitly disabled via build config 00:01:42.908 stack: explicitly disabled via build config 00:01:42.908 ipsec: explicitly disabled via build config 00:01:42.908 pdcp: explicitly disabled via build config 00:01:42.908 fib: explicitly disabled via build config 00:01:42.908 port: explicitly disabled via build config 00:01:42.908 pdump: explicitly disabled via build config 00:01:42.908 table: explicitly disabled via build config 00:01:42.908 pipeline: explicitly disabled via build config 00:01:42.908 graph: explicitly disabled via build config 00:01:42.908 node: explicitly disabled via build config 00:01:42.908 00:01:42.908 drivers: 00:01:42.908 common/cpt: not in enabled drivers build config 00:01:42.908 common/dpaax: not in enabled drivers build config 00:01:42.908 common/iavf: not in enabled drivers build config 00:01:42.908 common/idpf: not in enabled drivers build config 00:01:42.908 common/ionic: not in enabled drivers build config 00:01:42.908 common/mvep: not in enabled drivers build config 00:01:42.908 common/octeontx: not in enabled drivers build config 00:01:42.908 bus/auxiliary: not in enabled drivers build config 00:01:42.908 bus/cdx: not in enabled drivers build config 00:01:42.908 bus/dpaa: not in enabled drivers build config 00:01:42.908 bus/fslmc: not in enabled drivers build config 00:01:42.908 bus/ifpga: not in enabled drivers build config 00:01:42.908 bus/platform: not in enabled drivers build config 00:01:42.908 bus/uacce: not in enabled drivers build config 00:01:42.908 bus/vmbus: not in enabled drivers build config 00:01:42.908 common/cnxk: not in enabled drivers build config 00:01:42.908 common/mlx5: not in enabled drivers build config 00:01:42.908 
common/nfp: not in enabled drivers build config 00:01:42.908 common/nitrox: not in enabled drivers build config 00:01:42.908 common/qat: not in enabled drivers build config 00:01:42.908 common/sfc_efx: not in enabled drivers build config 00:01:42.908 mempool/bucket: not in enabled drivers build config 00:01:42.908 mempool/cnxk: not in enabled drivers build config 00:01:42.908 mempool/dpaa: not in enabled drivers build config 00:01:42.908 mempool/dpaa2: not in enabled drivers build config 00:01:42.908 mempool/octeontx: not in enabled drivers build config 00:01:42.908 mempool/stack: not in enabled drivers build config 00:01:42.908 dma/cnxk: not in enabled drivers build config 00:01:42.908 dma/dpaa: not in enabled drivers build config 00:01:42.908 dma/dpaa2: not in enabled drivers build config 00:01:42.908 dma/hisilicon: not in enabled drivers build config 00:01:42.908 dma/idxd: not in enabled drivers build config 00:01:42.908 dma/ioat: not in enabled drivers build config 00:01:42.908 dma/skeleton: not in enabled drivers build config 00:01:42.908 net/af_packet: not in enabled drivers build config 00:01:42.908 net/af_xdp: not in enabled drivers build config 00:01:42.908 net/ark: not in enabled drivers build config 00:01:42.908 net/atlantic: not in enabled drivers build config 00:01:42.908 net/avp: not in enabled drivers build config 00:01:42.908 net/axgbe: not in enabled drivers build config 00:01:42.908 net/bnx2x: not in enabled drivers build config 00:01:42.908 net/bnxt: not in enabled drivers build config 00:01:42.908 net/bonding: not in enabled drivers build config 00:01:42.908 net/cnxk: not in enabled drivers build config 00:01:42.908 net/cpfl: not in enabled drivers build config 00:01:42.908 net/cxgbe: not in enabled drivers build config 00:01:42.908 net/dpaa: not in enabled drivers build config 00:01:42.908 net/dpaa2: not in enabled drivers build config 00:01:42.908 net/e1000: not in enabled drivers build config 00:01:42.908 net/ena: not in enabled drivers build config 00:01:42.908 net/enetc: not in enabled drivers build config 00:01:42.908 net/enetfec: not in enabled drivers build config 00:01:42.908 net/enic: not in enabled drivers build config 00:01:42.908 net/failsafe: not in enabled drivers build config 00:01:42.908 net/fm10k: not in enabled drivers build config 00:01:42.908 net/gve: not in enabled drivers build config 00:01:42.908 net/hinic: not in enabled drivers build config 00:01:42.908 net/hns3: not in enabled drivers build config 00:01:42.908 net/i40e: not in enabled drivers build config 00:01:42.908 net/iavf: not in enabled drivers build config 00:01:42.908 net/ice: not in enabled drivers build config 00:01:42.908 net/idpf: not in enabled drivers build config 00:01:42.908 net/igc: not in enabled drivers build config 00:01:42.908 net/ionic: not in enabled drivers build config 00:01:42.908 net/ipn3ke: not in enabled drivers build config 00:01:42.908 net/ixgbe: not in enabled drivers build config 00:01:42.908 net/mana: not in enabled drivers build config 00:01:42.908 net/memif: not in enabled drivers build config 00:01:42.908 net/mlx4: not in enabled drivers build config 00:01:42.908 net/mlx5: not in enabled drivers build config 00:01:42.908 net/mvneta: not in enabled drivers build config 00:01:42.908 net/mvpp2: not in enabled drivers build config 00:01:42.908 net/netvsc: not in enabled drivers build config 00:01:42.908 net/nfb: not in enabled drivers build config 00:01:42.908 net/nfp: not in enabled drivers build config 00:01:42.908 net/ngbe: not in enabled drivers build 
config 00:01:42.908 net/null: not in enabled drivers build config 00:01:42.908 net/octeontx: not in enabled drivers build config 00:01:42.908 net/octeon_ep: not in enabled drivers build config 00:01:42.908 net/pcap: not in enabled drivers build config 00:01:42.908 net/pfe: not in enabled drivers build config 00:01:42.908 net/qede: not in enabled drivers build config 00:01:42.908 net/ring: not in enabled drivers build config 00:01:42.908 net/sfc: not in enabled drivers build config 00:01:42.908 net/softnic: not in enabled drivers build config 00:01:42.908 net/tap: not in enabled drivers build config 00:01:42.908 net/thunderx: not in enabled drivers build config 00:01:42.908 net/txgbe: not in enabled drivers build config 00:01:42.908 net/vdev_netvsc: not in enabled drivers build config 00:01:42.908 net/vhost: not in enabled drivers build config 00:01:42.908 net/virtio: not in enabled drivers build config 00:01:42.908 net/vmxnet3: not in enabled drivers build config 00:01:42.908 raw/*: missing internal dependency, "rawdev" 00:01:42.908 crypto/armv8: not in enabled drivers build config 00:01:42.908 crypto/bcmfs: not in enabled drivers build config 00:01:42.908 crypto/caam_jr: not in enabled drivers build config 00:01:42.908 crypto/ccp: not in enabled drivers build config 00:01:42.908 crypto/cnxk: not in enabled drivers build config 00:01:42.908 crypto/dpaa_sec: not in enabled drivers build config 00:01:42.908 crypto/dpaa2_sec: not in enabled drivers build config 00:01:42.908 crypto/ipsec_mb: not in enabled drivers build config 00:01:42.908 crypto/mlx5: not in enabled drivers build config 00:01:42.908 crypto/mvsam: not in enabled drivers build config 00:01:42.908 crypto/nitrox: not in enabled drivers build config 00:01:42.908 crypto/null: not in enabled drivers build config 00:01:42.908 crypto/octeontx: not in enabled drivers build config 00:01:42.908 crypto/openssl: not in enabled drivers build config 00:01:42.908 crypto/scheduler: not in enabled drivers build config 00:01:42.908 crypto/uadk: not in enabled drivers build config 00:01:42.908 crypto/virtio: not in enabled drivers build config 00:01:42.908 compress/isal: not in enabled drivers build config 00:01:42.908 compress/mlx5: not in enabled drivers build config 00:01:42.908 compress/nitrox: not in enabled drivers build config 00:01:42.908 compress/octeontx: not in enabled drivers build config 00:01:42.908 compress/zlib: not in enabled drivers build config 00:01:42.908 regex/*: missing internal dependency, "regexdev" 00:01:42.908 ml/*: missing internal dependency, "mldev" 00:01:42.908 vdpa/ifc: not in enabled drivers build config 00:01:42.908 vdpa/mlx5: not in enabled drivers build config 00:01:42.908 vdpa/nfp: not in enabled drivers build config 00:01:42.908 vdpa/sfc: not in enabled drivers build config 00:01:42.908 event/*: missing internal dependency, "eventdev" 00:01:42.908 baseband/*: missing internal dependency, "bbdev" 00:01:42.908 gpu/*: missing internal dependency, "gpudev" 00:01:42.908 00:01:42.908 00:01:42.908 Build targets in project: 84 00:01:42.908 00:01:42.908 DPDK 24.03.0 00:01:42.908 00:01:42.908 User defined options 00:01:42.908 buildtype : debug 00:01:42.908 default_library : shared 00:01:42.908 libdir : lib 00:01:42.908 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:42.908 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:42.908 c_link_args : 00:01:42.908 cpu_instruction_set: native 00:01:42.908 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:42.908 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:42.908 enable_docs : false 00:01:42.908 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:42.908 enable_kmods : false 00:01:42.908 max_lcores : 128 00:01:42.908 tests : false 00:01:42.908 00:01:42.908 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.908 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:42.908 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:42.908 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:42.908 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:42.908 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:42.908 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:42.908 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:42.908 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:43.169 [8/267] Linking static target lib/librte_kvargs.a 00:01:43.169 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:43.169 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:43.169 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:43.169 [12/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:43.169 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:43.169 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:43.169 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:43.169 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:43.169 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.169 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:43.169 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:43.169 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.169 [21/267] Linking static target lib/librte_log.a 00:01:43.169 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.169 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:43.169 [24/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:43.169 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:43.169 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.169 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.169 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.169 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:43.169 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:43.169 [31/267] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:43.169 [32/267] Linking static target lib/librte_pci.a 00:01:43.169 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:43.169 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.169 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.427 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.427 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.427 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.428 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:43.428 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.428 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:43.428 [42/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:43.428 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:43.428 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:43.428 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:43.428 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:43.428 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:43.428 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:43.428 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:43.687 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:43.687 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:43.687 [52/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:43.687 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:43.687 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:43.687 [55/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.687 [56/267] Linking static target lib/librte_timer.a 00:01:43.687 [57/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:43.687 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:43.687 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:43.687 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:43.687 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:43.687 [62/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:43.687 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:43.687 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:43.687 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:43.687 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:43.687 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:43.687 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:43.687 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:43.687 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:43.687 [71/267] 
Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:43.687 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:43.687 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.687 [74/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:43.687 [75/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:43.687 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:43.687 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:43.687 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.687 [79/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.688 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:43.688 [81/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.688 [82/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:43.688 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:43.688 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:43.688 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.688 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:43.688 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:43.688 [88/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:43.688 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:43.688 [90/267] Linking static target lib/librte_telemetry.a 00:01:43.688 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:43.688 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:43.688 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.688 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:43.688 [95/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:43.688 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:43.688 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:43.688 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:43.688 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:43.688 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.688 [101/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.688 [102/267] Linking static target lib/librte_ring.a 00:01:43.688 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.688 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:43.688 [105/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:43.688 [106/267] Linking static target lib/librte_dmadev.a 00:01:43.688 [107/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:43.688 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:43.688 [109/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.688 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.688 [111/267] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:43.688 [112/267] Linking static target lib/librte_meter.a 00:01:43.688 [113/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:43.688 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:43.688 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.688 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:43.688 [117/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:43.688 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:43.688 [119/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:43.688 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.688 [121/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.688 [122/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:43.688 [123/267] Linking static target lib/librte_rcu.a 00:01:43.688 [124/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:43.688 [125/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:43.688 [126/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:43.688 [127/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:43.688 [128/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:43.688 [129/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:43.688 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:43.688 [131/267] Linking static target lib/librte_mempool.a 00:01:43.688 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:43.688 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:43.688 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:43.688 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.688 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.688 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:43.688 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:43.688 [139/267] Linking static target lib/librte_compressdev.a 00:01:43.688 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:43.688 [141/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.688 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:43.688 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:43.688 [144/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:43.688 [145/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:43.688 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:43.688 [147/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:43.688 [148/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:43.688 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:43.688 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:43.688 [151/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:43.688 
[152/267] Linking static target lib/librte_power.a 00:01:43.688 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:43.688 [154/267] Linking static target lib/librte_cmdline.a 00:01:43.688 [155/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.688 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:43.688 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:43.688 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:43.688 [159/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:43.688 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:43.688 [161/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:43.688 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:43.688 [163/267] Linking static target lib/librte_reorder.a 00:01:43.948 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.948 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:43.948 [166/267] Linking target lib/librte_log.so.24.1 00:01:43.948 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:43.948 [168/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:43.948 [169/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:43.948 [170/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:43.948 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:43.948 [172/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.948 [173/267] Linking static target lib/librte_net.a 00:01:43.948 [174/267] Linking static target lib/librte_eal.a 00:01:43.948 [175/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:43.948 [176/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:43.948 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:43.948 [178/267] Linking static target lib/librte_security.a 00:01:43.948 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:43.948 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:43.948 [181/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:43.948 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.948 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:43.948 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.948 [185/267] Linking static target lib/librte_mbuf.a 00:01:43.948 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.948 [187/267] Linking static target drivers/librte_bus_vdev.a 00:01:43.949 [188/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.949 [189/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.949 [190/267] Linking target lib/librte_kvargs.so.24.1 00:01:43.949 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.949 [192/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.949 [193/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:43.949 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.949 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.949 [196/267] Linking static target drivers/librte_bus_pci.a 00:01:44.208 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.208 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:44.208 [199/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.208 [200/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.208 [201/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.208 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.208 [203/267] Linking static target lib/librte_hash.a 00:01:44.208 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:44.208 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:44.208 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.208 [207/267] Linking static target lib/librte_cryptodev.a 00:01:44.208 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.208 [209/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.208 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.208 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:44.208 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.469 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.469 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.469 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:44.469 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.469 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:44.469 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.731 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.731 [220/267] Linking static target lib/librte_ethdev.a 00:01:44.731 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.731 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.993 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.993 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.274 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.274 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.554 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:45.554 [228/267] Linking static target lib/librte_vhost.a 00:01:46.543 [229/267] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:47.927 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.513 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.458 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.458 [233/267] Linking target lib/librte_eal.so.24.1 00:01:55.459 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:55.720 [235/267] Linking target lib/librte_meter.so.24.1 00:01:55.720 [236/267] Linking target lib/librte_ring.so.24.1 00:01:55.720 [237/267] Linking target lib/librte_timer.so.24.1 00:01:55.720 [238/267] Linking target lib/librte_pci.so.24.1 00:01:55.720 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:55.720 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:55.720 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:55.720 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:55.720 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:55.720 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:55.720 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:55.720 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:55.720 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:55.720 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:55.981 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:55.981 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:55.981 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:55.981 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:56.243 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:56.243 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:56.243 [255/267] Linking target lib/librte_net.so.24.1 00:01:56.243 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:56.243 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:56.243 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:56.243 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:56.243 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:56.243 [261/267] Linking target lib/librte_security.so.24.1 00:01:56.243 [262/267] Linking target lib/librte_hash.so.24.1 00:01:56.503 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:56.503 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:56.503 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:56.503 [266/267] Linking target lib/librte_power.so.24.1 00:01:56.503 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:56.503 INFO: autodetecting backend as ninja 00:01:56.503 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:00.714 CC lib/ut_mock/mock.o 00:02:00.714 CC lib/log/log.o 00:02:00.714 CC lib/log/log_flags.o 00:02:00.714 CC lib/log/log_deprecated.o 00:02:00.714 CC lib/ut/ut.o 00:02:00.714 LIB libspdk_ut.a 00:02:00.714 LIB libspdk_log.a 00:02:00.714 
LIB libspdk_ut_mock.a 00:02:00.714 SO libspdk_ut.so.2.0 00:02:00.714 SO libspdk_log.so.7.1 00:02:00.714 SO libspdk_ut_mock.so.6.0 00:02:00.714 SYMLINK libspdk_ut.so 00:02:00.714 SYMLINK libspdk_log.so 00:02:00.714 SYMLINK libspdk_ut_mock.so 00:02:00.714 CC lib/util/base64.o 00:02:00.714 CC lib/util/bit_array.o 00:02:00.714 CC lib/util/cpuset.o 00:02:00.714 CC lib/dma/dma.o 00:02:00.714 CC lib/util/crc16.o 00:02:00.714 CXX lib/trace_parser/trace.o 00:02:00.714 CC lib/ioat/ioat.o 00:02:00.714 CC lib/util/crc32.o 00:02:00.714 CC lib/util/crc32c.o 00:02:00.714 CC lib/util/crc32_ieee.o 00:02:00.714 CC lib/util/crc64.o 00:02:00.714 CC lib/util/dif.o 00:02:00.714 CC lib/util/fd.o 00:02:00.714 CC lib/util/fd_group.o 00:02:00.714 CC lib/util/file.o 00:02:00.714 CC lib/util/hexlify.o 00:02:00.714 CC lib/util/iov.o 00:02:00.714 CC lib/util/math.o 00:02:00.714 CC lib/util/net.o 00:02:00.714 CC lib/util/pipe.o 00:02:00.714 CC lib/util/strerror_tls.o 00:02:00.714 CC lib/util/string.o 00:02:00.714 CC lib/util/uuid.o 00:02:00.714 CC lib/util/xor.o 00:02:00.714 CC lib/util/zipf.o 00:02:00.714 CC lib/util/md5.o 00:02:00.976 CC lib/vfio_user/host/vfio_user.o 00:02:00.976 CC lib/vfio_user/host/vfio_user_pci.o 00:02:00.976 LIB libspdk_dma.a 00:02:00.976 LIB libspdk_ioat.a 00:02:00.976 SO libspdk_dma.so.5.0 00:02:00.976 SO libspdk_ioat.so.7.0 00:02:00.976 SYMLINK libspdk_dma.so 00:02:01.239 SYMLINK libspdk_ioat.so 00:02:01.239 LIB libspdk_vfio_user.a 00:02:01.239 SO libspdk_vfio_user.so.5.0 00:02:01.239 LIB libspdk_util.a 00:02:01.239 SYMLINK libspdk_vfio_user.so 00:02:01.500 SO libspdk_util.so.10.1 00:02:01.500 SYMLINK libspdk_util.so 00:02:01.500 LIB libspdk_trace_parser.a 00:02:01.761 SO libspdk_trace_parser.so.6.0 00:02:01.761 SYMLINK libspdk_trace_parser.so 00:02:01.761 CC lib/idxd/idxd.o 00:02:01.761 CC lib/idxd/idxd_user.o 00:02:01.761 CC lib/idxd/idxd_kernel.o 00:02:01.761 CC lib/conf/conf.o 00:02:01.761 CC lib/vmd/vmd.o 00:02:01.761 CC lib/json/json_parse.o 00:02:01.761 CC lib/json/json_util.o 00:02:01.761 CC lib/vmd/led.o 00:02:01.761 CC lib/json/json_write.o 00:02:01.761 CC lib/rdma_utils/rdma_utils.o 00:02:01.761 CC lib/env_dpdk/env.o 00:02:01.761 CC lib/env_dpdk/memory.o 00:02:01.761 CC lib/env_dpdk/pci.o 00:02:01.761 CC lib/env_dpdk/init.o 00:02:01.761 CC lib/env_dpdk/threads.o 00:02:01.761 CC lib/env_dpdk/pci_ioat.o 00:02:01.761 CC lib/env_dpdk/pci_virtio.o 00:02:01.761 CC lib/env_dpdk/pci_vmd.o 00:02:01.761 CC lib/env_dpdk/pci_idxd.o 00:02:01.761 CC lib/env_dpdk/pci_event.o 00:02:01.761 CC lib/env_dpdk/sigbus_handler.o 00:02:01.761 CC lib/env_dpdk/pci_dpdk.o 00:02:02.023 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.023 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:02.285 LIB libspdk_conf.a 00:02:02.285 SO libspdk_conf.so.6.0 00:02:02.285 LIB libspdk_rdma_utils.a 00:02:02.285 LIB libspdk_json.a 00:02:02.285 SO libspdk_rdma_utils.so.1.0 00:02:02.285 SYMLINK libspdk_conf.so 00:02:02.285 SO libspdk_json.so.6.0 00:02:02.285 SYMLINK libspdk_rdma_utils.so 00:02:02.285 SYMLINK libspdk_json.so 00:02:02.545 LIB libspdk_idxd.a 00:02:02.545 SO libspdk_idxd.so.12.1 00:02:02.545 LIB libspdk_vmd.a 00:02:02.545 SO libspdk_vmd.so.6.0 00:02:02.545 SYMLINK libspdk_idxd.so 00:02:02.545 SYMLINK libspdk_vmd.so 00:02:02.806 CC lib/rdma_provider/common.o 00:02:02.806 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.806 CC lib/jsonrpc/jsonrpc_server.o 00:02:02.806 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:02.806 CC lib/jsonrpc/jsonrpc_client.o 00:02:02.806 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:02.806 LIB 
libspdk_rdma_provider.a 00:02:03.066 SO libspdk_rdma_provider.so.7.0 00:02:03.066 LIB libspdk_jsonrpc.a 00:02:03.066 SO libspdk_jsonrpc.so.6.0 00:02:03.066 SYMLINK libspdk_rdma_provider.so 00:02:03.066 SYMLINK libspdk_jsonrpc.so 00:02:03.066 LIB libspdk_env_dpdk.a 00:02:03.327 SO libspdk_env_dpdk.so.15.1 00:02:03.327 SYMLINK libspdk_env_dpdk.so 00:02:03.327 CC lib/rpc/rpc.o 00:02:03.588 LIB libspdk_rpc.a 00:02:03.588 SO libspdk_rpc.so.6.0 00:02:03.849 SYMLINK libspdk_rpc.so 00:02:04.111 CC lib/trace/trace.o 00:02:04.111 CC lib/trace/trace_flags.o 00:02:04.111 CC lib/trace/trace_rpc.o 00:02:04.111 CC lib/keyring/keyring.o 00:02:04.111 CC lib/notify/notify.o 00:02:04.111 CC lib/keyring/keyring_rpc.o 00:02:04.111 CC lib/notify/notify_rpc.o 00:02:04.373 LIB libspdk_notify.a 00:02:04.373 SO libspdk_notify.so.6.0 00:02:04.373 LIB libspdk_keyring.a 00:02:04.373 LIB libspdk_trace.a 00:02:04.373 SO libspdk_keyring.so.2.0 00:02:04.373 SO libspdk_trace.so.11.0 00:02:04.373 SYMLINK libspdk_notify.so 00:02:04.373 SYMLINK libspdk_keyring.so 00:02:04.373 SYMLINK libspdk_trace.so 00:02:04.944 CC lib/thread/thread.o 00:02:04.944 CC lib/thread/iobuf.o 00:02:04.944 CC lib/sock/sock.o 00:02:04.944 CC lib/sock/sock_rpc.o 00:02:05.205 LIB libspdk_sock.a 00:02:05.205 SO libspdk_sock.so.10.0 00:02:05.466 SYMLINK libspdk_sock.so 00:02:05.728 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.728 CC lib/nvme/nvme_ctrlr.o 00:02:05.728 CC lib/nvme/nvme_fabric.o 00:02:05.728 CC lib/nvme/nvme_ns_cmd.o 00:02:05.728 CC lib/nvme/nvme_ns.o 00:02:05.728 CC lib/nvme/nvme_pcie_common.o 00:02:05.728 CC lib/nvme/nvme_pcie.o 00:02:05.728 CC lib/nvme/nvme_qpair.o 00:02:05.728 CC lib/nvme/nvme.o 00:02:05.728 CC lib/nvme/nvme_quirks.o 00:02:05.728 CC lib/nvme/nvme_transport.o 00:02:05.728 CC lib/nvme/nvme_discovery.o 00:02:05.728 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.728 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.728 CC lib/nvme/nvme_tcp.o 00:02:05.728 CC lib/nvme/nvme_opal.o 00:02:05.728 CC lib/nvme/nvme_io_msg.o 00:02:05.728 CC lib/nvme/nvme_poll_group.o 00:02:05.728 CC lib/nvme/nvme_zns.o 00:02:05.728 CC lib/nvme/nvme_stubs.o 00:02:05.728 CC lib/nvme/nvme_auth.o 00:02:05.728 CC lib/nvme/nvme_cuse.o 00:02:05.728 CC lib/nvme/nvme_vfio_user.o 00:02:05.728 CC lib/nvme/nvme_rdma.o 00:02:06.300 LIB libspdk_thread.a 00:02:06.300 SO libspdk_thread.so.11.0 00:02:06.300 SYMLINK libspdk_thread.so 00:02:06.562 CC lib/virtio/virtio.o 00:02:06.562 CC lib/virtio/virtio_vfio_user.o 00:02:06.562 CC lib/virtio/virtio_vhost_user.o 00:02:06.562 CC lib/virtio/virtio_pci.o 00:02:06.562 CC lib/init/json_config.o 00:02:06.562 CC lib/init/subsystem.o 00:02:06.562 CC lib/blob/blobstore.o 00:02:06.562 CC lib/init/subsystem_rpc.o 00:02:06.562 CC lib/init/rpc.o 00:02:06.562 CC lib/vfu_tgt/tgt_endpoint.o 00:02:06.562 CC lib/blob/request.o 00:02:06.562 CC lib/fsdev/fsdev.o 00:02:06.562 CC lib/blob/zeroes.o 00:02:06.562 CC lib/vfu_tgt/tgt_rpc.o 00:02:06.562 CC lib/blob/blob_bs_dev.o 00:02:06.562 CC lib/fsdev/fsdev_io.o 00:02:06.562 CC lib/fsdev/fsdev_rpc.o 00:02:06.562 CC lib/accel/accel.o 00:02:06.562 CC lib/accel/accel_rpc.o 00:02:06.562 CC lib/accel/accel_sw.o 00:02:06.823 LIB libspdk_init.a 00:02:06.823 SO libspdk_init.so.6.0 00:02:07.084 LIB libspdk_virtio.a 00:02:07.085 LIB libspdk_vfu_tgt.a 00:02:07.085 SO libspdk_virtio.so.7.0 00:02:07.085 SYMLINK libspdk_init.so 00:02:07.085 SO libspdk_vfu_tgt.so.3.0 00:02:07.085 SYMLINK libspdk_virtio.so 00:02:07.085 SYMLINK libspdk_vfu_tgt.so 00:02:07.345 LIB libspdk_fsdev.a 00:02:07.345 SO libspdk_fsdev.so.2.0 00:02:07.345 
CC lib/event/app.o 00:02:07.345 CC lib/event/reactor.o 00:02:07.345 CC lib/event/log_rpc.o 00:02:07.345 CC lib/event/app_rpc.o 00:02:07.345 CC lib/event/scheduler_static.o 00:02:07.345 SYMLINK libspdk_fsdev.so 00:02:07.607 LIB libspdk_accel.a 00:02:07.607 LIB libspdk_nvme.a 00:02:07.607 SO libspdk_accel.so.16.0 00:02:07.607 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:07.867 SYMLINK libspdk_accel.so 00:02:07.867 LIB libspdk_event.a 00:02:07.867 SO libspdk_nvme.so.15.0 00:02:07.867 SO libspdk_event.so.14.0 00:02:07.867 SYMLINK libspdk_event.so 00:02:08.129 SYMLINK libspdk_nvme.so 00:02:08.129 CC lib/bdev/bdev.o 00:02:08.129 CC lib/bdev/bdev_rpc.o 00:02:08.129 CC lib/bdev/bdev_zone.o 00:02:08.129 CC lib/bdev/part.o 00:02:08.129 CC lib/bdev/scsi_nvme.o 00:02:08.391 LIB libspdk_fuse_dispatcher.a 00:02:08.391 SO libspdk_fuse_dispatcher.so.1.0 00:02:08.391 SYMLINK libspdk_fuse_dispatcher.so 00:02:09.334 LIB libspdk_blob.a 00:02:09.334 SO libspdk_blob.so.11.0 00:02:09.334 SYMLINK libspdk_blob.so 00:02:09.905 CC lib/blobfs/blobfs.o 00:02:09.905 CC lib/lvol/lvol.o 00:02:09.905 CC lib/blobfs/tree.o 00:02:10.477 LIB libspdk_bdev.a 00:02:10.477 SO libspdk_bdev.so.17.0 00:02:10.477 LIB libspdk_blobfs.a 00:02:10.477 SO libspdk_blobfs.so.10.0 00:02:10.737 SYMLINK libspdk_bdev.so 00:02:10.737 LIB libspdk_lvol.a 00:02:10.737 SYMLINK libspdk_blobfs.so 00:02:10.737 SO libspdk_lvol.so.10.0 00:02:10.737 SYMLINK libspdk_lvol.so 00:02:10.999 CC lib/nbd/nbd.o 00:02:10.999 CC lib/nbd/nbd_rpc.o 00:02:10.999 CC lib/ublk/ublk.o 00:02:10.999 CC lib/ublk/ublk_rpc.o 00:02:10.999 CC lib/nvmf/ctrlr.o 00:02:10.999 CC lib/scsi/dev.o 00:02:10.999 CC lib/nvmf/ctrlr_discovery.o 00:02:10.999 CC lib/nvmf/ctrlr_bdev.o 00:02:10.999 CC lib/scsi/lun.o 00:02:10.999 CC lib/ftl/ftl_core.o 00:02:10.999 CC lib/nvmf/subsystem.o 00:02:10.999 CC lib/scsi/port.o 00:02:10.999 CC lib/ftl/ftl_init.o 00:02:10.999 CC lib/nvmf/nvmf.o 00:02:10.999 CC lib/scsi/scsi.o 00:02:10.999 CC lib/ftl/ftl_layout.o 00:02:10.999 CC lib/nvmf/nvmf_rpc.o 00:02:10.999 CC lib/scsi/scsi_bdev.o 00:02:10.999 CC lib/ftl/ftl_debug.o 00:02:10.999 CC lib/ftl/ftl_io.o 00:02:10.999 CC lib/scsi/scsi_pr.o 00:02:10.999 CC lib/nvmf/tcp.o 00:02:10.999 CC lib/ftl/ftl_sb.o 00:02:10.999 CC lib/nvmf/transport.o 00:02:10.999 CC lib/scsi/scsi_rpc.o 00:02:10.999 CC lib/ftl/ftl_l2p.o 00:02:10.999 CC lib/nvmf/stubs.o 00:02:10.999 CC lib/scsi/task.o 00:02:10.999 CC lib/nvmf/mdns_server.o 00:02:10.999 CC lib/ftl/ftl_l2p_flat.o 00:02:10.999 CC lib/nvmf/vfio_user.o 00:02:10.999 CC lib/ftl/ftl_nv_cache.o 00:02:10.999 CC lib/nvmf/rdma.o 00:02:10.999 CC lib/nvmf/auth.o 00:02:10.999 CC lib/ftl/ftl_band.o 00:02:10.999 CC lib/ftl/ftl_band_ops.o 00:02:10.999 CC lib/ftl/ftl_writer.o 00:02:10.999 CC lib/ftl/ftl_rq.o 00:02:10.999 CC lib/ftl/ftl_reloc.o 00:02:10.999 CC lib/ftl/ftl_l2p_cache.o 00:02:10.999 CC lib/ftl/ftl_p2l.o 00:02:10.999 CC lib/ftl/ftl_p2l_log.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.999 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.999 CC lib/ftl/utils/ftl_conf.o 00:02:10.999 CC 
lib/ftl/utils/ftl_md.o 00:02:10.999 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.999 CC lib/ftl/utils/ftl_mempool.o 00:02:10.999 CC lib/ftl/utils/ftl_property.o 00:02:10.999 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.999 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.999 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.999 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.999 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.999 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.999 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:10.999 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:10.999 CC lib/ftl/base/ftl_base_dev.o 00:02:10.999 CC lib/ftl/ftl_trace.o 00:02:10.999 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.945 LIB libspdk_nbd.a 00:02:11.945 SO libspdk_nbd.so.7.0 00:02:11.945 LIB libspdk_scsi.a 00:02:11.945 SO libspdk_scsi.so.9.0 00:02:11.945 SYMLINK libspdk_nbd.so 00:02:11.945 LIB libspdk_ublk.a 00:02:11.945 SYMLINK libspdk_scsi.so 00:02:11.945 SO libspdk_ublk.so.3.0 00:02:12.207 SYMLINK libspdk_ublk.so 00:02:12.207 LIB libspdk_ftl.a 00:02:12.207 CC lib/iscsi/conn.o 00:02:12.207 CC lib/vhost/vhost.o 00:02:12.207 CC lib/iscsi/init_grp.o 00:02:12.207 CC lib/vhost/vhost_rpc.o 00:02:12.207 CC lib/iscsi/iscsi.o 00:02:12.207 CC lib/vhost/vhost_scsi.o 00:02:12.207 CC lib/iscsi/param.o 00:02:12.207 CC lib/vhost/vhost_blk.o 00:02:12.207 CC lib/iscsi/portal_grp.o 00:02:12.207 CC lib/vhost/rte_vhost_user.o 00:02:12.207 CC lib/iscsi/tgt_node.o 00:02:12.207 CC lib/iscsi/iscsi_subsystem.o 00:02:12.207 CC lib/iscsi/iscsi_rpc.o 00:02:12.207 CC lib/iscsi/task.o 00:02:12.469 SO libspdk_ftl.so.9.0 00:02:12.731 SYMLINK libspdk_ftl.so 00:02:13.305 LIB libspdk_nvmf.a 00:02:13.305 SO libspdk_nvmf.so.20.0 00:02:13.305 LIB libspdk_vhost.a 00:02:13.305 SO libspdk_vhost.so.8.0 00:02:13.568 SYMLINK libspdk_nvmf.so 00:02:13.568 SYMLINK libspdk_vhost.so 00:02:13.568 LIB libspdk_iscsi.a 00:02:13.568 SO libspdk_iscsi.so.8.0 00:02:13.830 SYMLINK libspdk_iscsi.so 00:02:14.402 CC module/env_dpdk/env_dpdk_rpc.o 00:02:14.402 CC module/vfu_device/vfu_virtio.o 00:02:14.402 CC module/vfu_device/vfu_virtio_blk.o 00:02:14.402 CC module/vfu_device/vfu_virtio_scsi.o 00:02:14.402 CC module/vfu_device/vfu_virtio_rpc.o 00:02:14.402 CC module/vfu_device/vfu_virtio_fs.o 00:02:14.402 LIB libspdk_env_dpdk_rpc.a 00:02:14.402 CC module/keyring/file/keyring.o 00:02:14.403 CC module/keyring/file/keyring_rpc.o 00:02:14.403 CC module/accel/iaa/accel_iaa.o 00:02:14.403 CC module/accel/error/accel_error.o 00:02:14.403 CC module/accel/iaa/accel_iaa_rpc.o 00:02:14.664 CC module/accel/error/accel_error_rpc.o 00:02:14.664 CC module/accel/ioat/accel_ioat.o 00:02:14.664 CC module/accel/dsa/accel_dsa.o 00:02:14.664 CC module/accel/dsa/accel_dsa_rpc.o 00:02:14.664 CC module/accel/ioat/accel_ioat_rpc.o 00:02:14.664 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:14.664 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:14.664 CC module/scheduler/gscheduler/gscheduler.o 00:02:14.664 CC module/fsdev/aio/fsdev_aio.o 00:02:14.664 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:14.664 CC module/blob/bdev/blob_bdev.o 00:02:14.665 CC module/fsdev/aio/linux_aio_mgr.o 00:02:14.665 CC module/keyring/linux/keyring.o 00:02:14.665 CC module/keyring/linux/keyring_rpc.o 00:02:14.665 CC module/sock/posix/posix.o 00:02:14.665 SO libspdk_env_dpdk_rpc.so.6.0 
00:02:14.665 SYMLINK libspdk_env_dpdk_rpc.so 00:02:14.665 LIB libspdk_keyring_file.a 00:02:14.665 LIB libspdk_scheduler_gscheduler.a 00:02:14.665 LIB libspdk_keyring_linux.a 00:02:14.665 LIB libspdk_scheduler_dpdk_governor.a 00:02:14.665 SO libspdk_keyring_file.so.2.0 00:02:14.665 SO libspdk_keyring_linux.so.1.0 00:02:14.665 LIB libspdk_accel_error.a 00:02:14.665 LIB libspdk_scheduler_dynamic.a 00:02:14.665 SO libspdk_scheduler_gscheduler.so.4.0 00:02:14.665 LIB libspdk_accel_ioat.a 00:02:14.665 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:14.926 LIB libspdk_accel_iaa.a 00:02:14.926 SO libspdk_scheduler_dynamic.so.4.0 00:02:14.926 SO libspdk_accel_error.so.2.0 00:02:14.926 SO libspdk_accel_ioat.so.6.0 00:02:14.926 SYMLINK libspdk_keyring_linux.so 00:02:14.926 SO libspdk_accel_iaa.so.3.0 00:02:14.926 SYMLINK libspdk_keyring_file.so 00:02:14.926 LIB libspdk_accel_dsa.a 00:02:14.926 SYMLINK libspdk_scheduler_gscheduler.so 00:02:14.926 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:14.926 LIB libspdk_blob_bdev.a 00:02:14.926 SYMLINK libspdk_scheduler_dynamic.so 00:02:14.926 SO libspdk_accel_dsa.so.5.0 00:02:14.926 SYMLINK libspdk_accel_error.so 00:02:14.926 SYMLINK libspdk_accel_ioat.so 00:02:14.926 SO libspdk_blob_bdev.so.11.0 00:02:14.926 SYMLINK libspdk_accel_iaa.so 00:02:14.926 LIB libspdk_vfu_device.a 00:02:14.926 SYMLINK libspdk_accel_dsa.so 00:02:14.926 SYMLINK libspdk_blob_bdev.so 00:02:14.926 SO libspdk_vfu_device.so.3.0 00:02:15.187 SYMLINK libspdk_vfu_device.so 00:02:15.187 LIB libspdk_fsdev_aio.a 00:02:15.187 SO libspdk_fsdev_aio.so.1.0 00:02:15.187 LIB libspdk_sock_posix.a 00:02:15.448 SO libspdk_sock_posix.so.6.0 00:02:15.448 SYMLINK libspdk_fsdev_aio.so 00:02:15.448 SYMLINK libspdk_sock_posix.so 00:02:15.448 CC module/bdev/malloc/bdev_malloc.o 00:02:15.448 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:15.448 CC module/bdev/gpt/gpt.o 00:02:15.448 CC module/bdev/gpt/vbdev_gpt.o 00:02:15.448 CC module/bdev/delay/vbdev_delay.o 00:02:15.448 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:15.448 CC module/bdev/passthru/vbdev_passthru.o 00:02:15.448 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:15.448 CC module/bdev/error/vbdev_error.o 00:02:15.448 CC module/blobfs/bdev/blobfs_bdev.o 00:02:15.448 CC module/bdev/error/vbdev_error_rpc.o 00:02:15.448 CC module/bdev/nvme/bdev_nvme.o 00:02:15.448 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:15.448 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:15.448 CC module/bdev/nvme/nvme_rpc.o 00:02:15.448 CC module/bdev/nvme/vbdev_opal.o 00:02:15.448 CC module/bdev/nvme/bdev_mdns_client.o 00:02:15.448 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:15.448 CC module/bdev/null/bdev_null.o 00:02:15.448 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:15.448 CC module/bdev/null/bdev_null_rpc.o 00:02:15.448 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:15.448 CC module/bdev/lvol/vbdev_lvol.o 00:02:15.448 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:15.448 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:15.448 CC module/bdev/iscsi/bdev_iscsi.o 00:02:15.448 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:15.448 CC module/bdev/aio/bdev_aio.o 00:02:15.448 CC module/bdev/ftl/bdev_ftl.o 00:02:15.448 CC module/bdev/aio/bdev_aio_rpc.o 00:02:15.448 CC module/bdev/raid/bdev_raid.o 00:02:15.448 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:15.448 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:15.448 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:15.448 CC module/bdev/split/vbdev_split.o 00:02:15.708 CC module/bdev/raid/bdev_raid_rpc.o 00:02:15.708 CC 
module/bdev/split/vbdev_split_rpc.o 00:02:15.708 CC module/bdev/raid/bdev_raid_sb.o 00:02:15.708 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:15.708 CC module/bdev/raid/raid0.o 00:02:15.708 CC module/bdev/raid/concat.o 00:02:15.708 CC module/bdev/raid/raid1.o 00:02:15.970 LIB libspdk_blobfs_bdev.a 00:02:15.971 SO libspdk_blobfs_bdev.so.6.0 00:02:15.971 LIB libspdk_bdev_split.a 00:02:15.971 LIB libspdk_bdev_error.a 00:02:15.971 SO libspdk_bdev_split.so.6.0 00:02:15.971 LIB libspdk_bdev_gpt.a 00:02:15.971 SYMLINK libspdk_blobfs_bdev.so 00:02:15.971 LIB libspdk_bdev_null.a 00:02:15.971 SO libspdk_bdev_error.so.6.0 00:02:15.971 LIB libspdk_bdev_passthru.a 00:02:15.971 SO libspdk_bdev_gpt.so.6.0 00:02:15.971 LIB libspdk_bdev_ftl.a 00:02:15.971 SO libspdk_bdev_null.so.6.0 00:02:15.971 SO libspdk_bdev_passthru.so.6.0 00:02:15.971 SYMLINK libspdk_bdev_split.so 00:02:15.971 LIB libspdk_bdev_malloc.a 00:02:15.971 SYMLINK libspdk_bdev_error.so 00:02:15.971 LIB libspdk_bdev_zone_block.a 00:02:15.971 SO libspdk_bdev_ftl.so.6.0 00:02:15.971 LIB libspdk_bdev_delay.a 00:02:15.971 LIB libspdk_bdev_iscsi.a 00:02:15.971 SYMLINK libspdk_bdev_gpt.so 00:02:15.971 LIB libspdk_bdev_aio.a 00:02:15.971 SO libspdk_bdev_malloc.so.6.0 00:02:15.971 SYMLINK libspdk_bdev_null.so 00:02:15.971 SO libspdk_bdev_zone_block.so.6.0 00:02:15.971 SO libspdk_bdev_delay.so.6.0 00:02:15.971 SYMLINK libspdk_bdev_passthru.so 00:02:15.971 SO libspdk_bdev_iscsi.so.6.0 00:02:16.231 SO libspdk_bdev_aio.so.6.0 00:02:16.232 SYMLINK libspdk_bdev_ftl.so 00:02:16.232 SYMLINK libspdk_bdev_malloc.so 00:02:16.232 SYMLINK libspdk_bdev_zone_block.so 00:02:16.232 SYMLINK libspdk_bdev_delay.so 00:02:16.232 LIB libspdk_bdev_lvol.a 00:02:16.232 SYMLINK libspdk_bdev_iscsi.so 00:02:16.232 SYMLINK libspdk_bdev_aio.so 00:02:16.232 LIB libspdk_bdev_virtio.a 00:02:16.232 SO libspdk_bdev_lvol.so.6.0 00:02:16.232 SO libspdk_bdev_virtio.so.6.0 00:02:16.232 SYMLINK libspdk_bdev_lvol.so 00:02:16.232 SYMLINK libspdk_bdev_virtio.so 00:02:16.802 LIB libspdk_bdev_raid.a 00:02:16.802 SO libspdk_bdev_raid.so.6.0 00:02:16.802 SYMLINK libspdk_bdev_raid.so 00:02:18.186 LIB libspdk_bdev_nvme.a 00:02:18.186 SO libspdk_bdev_nvme.so.7.1 00:02:18.186 SYMLINK libspdk_bdev_nvme.so 00:02:18.759 CC module/event/subsystems/iobuf/iobuf.o 00:02:18.759 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:18.759 CC module/event/subsystems/vmd/vmd.o 00:02:18.759 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:18.759 CC module/event/subsystems/keyring/keyring.o 00:02:18.759 CC module/event/subsystems/sock/sock.o 00:02:18.759 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:18.759 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:18.759 CC module/event/subsystems/fsdev/fsdev.o 00:02:18.759 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.021 LIB libspdk_event_keyring.a 00:02:19.021 LIB libspdk_event_vmd.a 00:02:19.021 LIB libspdk_event_iobuf.a 00:02:19.021 LIB libspdk_event_fsdev.a 00:02:19.021 LIB libspdk_event_vhost_blk.a 00:02:19.021 LIB libspdk_event_sock.a 00:02:19.021 LIB libspdk_event_vfu_tgt.a 00:02:19.021 LIB libspdk_event_scheduler.a 00:02:19.021 SO libspdk_event_vmd.so.6.0 00:02:19.021 SO libspdk_event_fsdev.so.1.0 00:02:19.021 SO libspdk_event_iobuf.so.3.0 00:02:19.021 SO libspdk_event_keyring.so.1.0 00:02:19.021 SO libspdk_event_vhost_blk.so.3.0 00:02:19.021 SO libspdk_event_vfu_tgt.so.3.0 00:02:19.021 SO libspdk_event_sock.so.5.0 00:02:19.021 SO libspdk_event_scheduler.so.4.0 00:02:19.021 SYMLINK libspdk_event_keyring.so 00:02:19.021 SYMLINK 
libspdk_event_vhost_blk.so 00:02:19.021 SYMLINK libspdk_event_fsdev.so 00:02:19.021 SYMLINK libspdk_event_iobuf.so 00:02:19.021 SYMLINK libspdk_event_sock.so 00:02:19.021 SYMLINK libspdk_event_scheduler.so 00:02:19.021 SYMLINK libspdk_event_vmd.so 00:02:19.021 SYMLINK libspdk_event_vfu_tgt.so 00:02:19.593 CC module/event/subsystems/accel/accel.o 00:02:19.593 LIB libspdk_event_accel.a 00:02:19.593 SO libspdk_event_accel.so.6.0 00:02:19.593 SYMLINK libspdk_event_accel.so 00:02:20.165 CC module/event/subsystems/bdev/bdev.o 00:02:20.165 LIB libspdk_event_bdev.a 00:02:20.165 SO libspdk_event_bdev.so.6.0 00:02:20.426 SYMLINK libspdk_event_bdev.so 00:02:20.686 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:20.686 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:20.686 CC module/event/subsystems/scsi/scsi.o 00:02:20.686 CC module/event/subsystems/nbd/nbd.o 00:02:20.686 CC module/event/subsystems/ublk/ublk.o 00:02:20.946 LIB libspdk_event_nbd.a 00:02:20.946 LIB libspdk_event_ublk.a 00:02:20.947 LIB libspdk_event_scsi.a 00:02:20.947 SO libspdk_event_nbd.so.6.0 00:02:20.947 SO libspdk_event_ublk.so.3.0 00:02:20.947 SO libspdk_event_scsi.so.6.0 00:02:20.947 LIB libspdk_event_nvmf.a 00:02:20.947 SO libspdk_event_nvmf.so.6.0 00:02:20.947 SYMLINK libspdk_event_nbd.so 00:02:20.947 SYMLINK libspdk_event_ublk.so 00:02:20.947 SYMLINK libspdk_event_scsi.so 00:02:20.947 SYMLINK libspdk_event_nvmf.so 00:02:21.207 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:21.207 CC module/event/subsystems/iscsi/iscsi.o 00:02:21.468 LIB libspdk_event_vhost_scsi.a 00:02:21.468 LIB libspdk_event_iscsi.a 00:02:21.468 SO libspdk_event_vhost_scsi.so.3.0 00:02:21.468 SO libspdk_event_iscsi.so.6.0 00:02:21.728 SYMLINK libspdk_event_vhost_scsi.so 00:02:21.728 SYMLINK libspdk_event_iscsi.so 00:02:21.728 SO libspdk.so.6.0 00:02:21.728 SYMLINK libspdk.so 00:02:22.302 CXX app/trace/trace.o 00:02:22.302 CC app/trace_record/trace_record.o 00:02:22.302 CC test/rpc_client/rpc_client_test.o 00:02:22.302 CC app/spdk_nvme_perf/perf.o 00:02:22.302 CC app/spdk_top/spdk_top.o 00:02:22.302 CC app/spdk_nvme_identify/identify.o 00:02:22.302 CC app/spdk_lspci/spdk_lspci.o 00:02:22.302 TEST_HEADER include/spdk/accel.h 00:02:22.302 TEST_HEADER include/spdk/accel_module.h 00:02:22.302 TEST_HEADER include/spdk/assert.h 00:02:22.302 CC app/spdk_nvme_discover/discovery_aer.o 00:02:22.302 TEST_HEADER include/spdk/barrier.h 00:02:22.302 TEST_HEADER include/spdk/base64.h 00:02:22.302 TEST_HEADER include/spdk/bdev.h 00:02:22.302 TEST_HEADER include/spdk/bdev_module.h 00:02:22.302 TEST_HEADER include/spdk/bit_array.h 00:02:22.302 TEST_HEADER include/spdk/bdev_zone.h 00:02:22.302 TEST_HEADER include/spdk/bit_pool.h 00:02:22.302 TEST_HEADER include/spdk/blob_bdev.h 00:02:22.302 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:22.302 TEST_HEADER include/spdk/blobfs.h 00:02:22.302 TEST_HEADER include/spdk/blob.h 00:02:22.302 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:22.302 TEST_HEADER include/spdk/conf.h 00:02:22.302 TEST_HEADER include/spdk/config.h 00:02:22.302 TEST_HEADER include/spdk/cpuset.h 00:02:22.302 TEST_HEADER include/spdk/crc16.h 00:02:22.302 TEST_HEADER include/spdk/crc32.h 00:02:22.302 TEST_HEADER include/spdk/crc64.h 00:02:22.302 TEST_HEADER include/spdk/dif.h 00:02:22.302 TEST_HEADER include/spdk/endian.h 00:02:22.302 TEST_HEADER include/spdk/dma.h 00:02:22.302 TEST_HEADER include/spdk/env_dpdk.h 00:02:22.302 TEST_HEADER include/spdk/env.h 00:02:22.302 TEST_HEADER include/spdk/event.h 00:02:22.302 TEST_HEADER include/spdk/fd.h 
00:02:22.302 TEST_HEADER include/spdk/fd_group.h 00:02:22.302 TEST_HEADER include/spdk/file.h 00:02:22.302 TEST_HEADER include/spdk/fsdev.h 00:02:22.302 TEST_HEADER include/spdk/fsdev_module.h 00:02:22.302 CC app/nvmf_tgt/nvmf_main.o 00:02:22.302 TEST_HEADER include/spdk/ftl.h 00:02:22.302 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:22.302 TEST_HEADER include/spdk/gpt_spec.h 00:02:22.302 TEST_HEADER include/spdk/hexlify.h 00:02:22.302 TEST_HEADER include/spdk/histogram_data.h 00:02:22.302 CC app/spdk_dd/spdk_dd.o 00:02:22.302 TEST_HEADER include/spdk/idxd.h 00:02:22.302 TEST_HEADER include/spdk/init.h 00:02:22.302 CC app/iscsi_tgt/iscsi_tgt.o 00:02:22.302 TEST_HEADER include/spdk/idxd_spec.h 00:02:22.302 TEST_HEADER include/spdk/ioat.h 00:02:22.302 TEST_HEADER include/spdk/ioat_spec.h 00:02:22.302 TEST_HEADER include/spdk/iscsi_spec.h 00:02:22.302 TEST_HEADER include/spdk/json.h 00:02:22.302 TEST_HEADER include/spdk/keyring.h 00:02:22.302 TEST_HEADER include/spdk/jsonrpc.h 00:02:22.302 TEST_HEADER include/spdk/keyring_module.h 00:02:22.302 TEST_HEADER include/spdk/likely.h 00:02:22.302 TEST_HEADER include/spdk/log.h 00:02:22.302 TEST_HEADER include/spdk/lvol.h 00:02:22.302 TEST_HEADER include/spdk/md5.h 00:02:22.302 TEST_HEADER include/spdk/mmio.h 00:02:22.302 TEST_HEADER include/spdk/memory.h 00:02:22.302 TEST_HEADER include/spdk/nbd.h 00:02:22.302 CC app/spdk_tgt/spdk_tgt.o 00:02:22.302 TEST_HEADER include/spdk/notify.h 00:02:22.302 TEST_HEADER include/spdk/net.h 00:02:22.302 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.302 TEST_HEADER include/spdk/nvme.h 00:02:22.302 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.302 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.302 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.302 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.302 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.302 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.302 TEST_HEADER include/spdk/nvmf.h 00:02:22.302 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.302 TEST_HEADER include/spdk/opal.h 00:02:22.302 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.302 TEST_HEADER include/spdk/opal_spec.h 00:02:22.302 TEST_HEADER include/spdk/pci_ids.h 00:02:22.302 TEST_HEADER include/spdk/pipe.h 00:02:22.302 TEST_HEADER include/spdk/queue.h 00:02:22.302 TEST_HEADER include/spdk/reduce.h 00:02:22.302 TEST_HEADER include/spdk/rpc.h 00:02:22.302 TEST_HEADER include/spdk/scheduler.h 00:02:22.302 TEST_HEADER include/spdk/scsi.h 00:02:22.302 TEST_HEADER include/spdk/scsi_spec.h 00:02:22.302 TEST_HEADER include/spdk/stdinc.h 00:02:22.302 TEST_HEADER include/spdk/string.h 00:02:22.302 TEST_HEADER include/spdk/thread.h 00:02:22.302 TEST_HEADER include/spdk/sock.h 00:02:22.302 TEST_HEADER include/spdk/trace_parser.h 00:02:22.302 TEST_HEADER include/spdk/trace.h 00:02:22.302 TEST_HEADER include/spdk/tree.h 00:02:22.302 TEST_HEADER include/spdk/ublk.h 00:02:22.302 TEST_HEADER include/spdk/util.h 00:02:22.302 TEST_HEADER include/spdk/uuid.h 00:02:22.302 TEST_HEADER include/spdk/version.h 00:02:22.302 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:22.302 TEST_HEADER include/spdk/vmd.h 00:02:22.302 TEST_HEADER include/spdk/vhost.h 00:02:22.302 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:22.302 TEST_HEADER include/spdk/zipf.h 00:02:22.302 TEST_HEADER include/spdk/xor.h 00:02:22.302 CXX test/cpp_headers/accel.o 00:02:22.302 CXX test/cpp_headers/accel_module.o 00:02:22.302 CXX test/cpp_headers/barrier.o 00:02:22.302 CXX test/cpp_headers/assert.o 00:02:22.302 CXX test/cpp_headers/base64.o 00:02:22.302 
CXX test/cpp_headers/bdev.o 00:02:22.302 CXX test/cpp_headers/bdev_zone.o 00:02:22.302 CXX test/cpp_headers/bdev_module.o 00:02:22.302 CXX test/cpp_headers/bit_array.o 00:02:22.302 CXX test/cpp_headers/blob_bdev.o 00:02:22.303 CXX test/cpp_headers/bit_pool.o 00:02:22.303 CXX test/cpp_headers/blobfs_bdev.o 00:02:22.303 CXX test/cpp_headers/blob.o 00:02:22.303 CXX test/cpp_headers/conf.o 00:02:22.303 CXX test/cpp_headers/blobfs.o 00:02:22.303 CXX test/cpp_headers/cpuset.o 00:02:22.303 CXX test/cpp_headers/config.o 00:02:22.303 CXX test/cpp_headers/crc16.o 00:02:22.303 CXX test/cpp_headers/crc64.o 00:02:22.303 CXX test/cpp_headers/dma.o 00:02:22.303 CXX test/cpp_headers/dif.o 00:02:22.574 CXX test/cpp_headers/crc32.o 00:02:22.574 CXX test/cpp_headers/endian.o 00:02:22.574 CXX test/cpp_headers/env_dpdk.o 00:02:22.574 CXX test/cpp_headers/event.o 00:02:22.574 CXX test/cpp_headers/env.o 00:02:22.574 CXX test/cpp_headers/fd_group.o 00:02:22.574 CXX test/cpp_headers/file.o 00:02:22.574 CXX test/cpp_headers/fsdev_module.o 00:02:22.574 CXX test/cpp_headers/fd.o 00:02:22.574 CXX test/cpp_headers/fsdev.o 00:02:22.574 CXX test/cpp_headers/fuse_dispatcher.o 00:02:22.574 CXX test/cpp_headers/ftl.o 00:02:22.574 CXX test/cpp_headers/gpt_spec.o 00:02:22.574 CXX test/cpp_headers/hexlify.o 00:02:22.574 CXX test/cpp_headers/histogram_data.o 00:02:22.574 CXX test/cpp_headers/idxd.o 00:02:22.574 CXX test/cpp_headers/idxd_spec.o 00:02:22.574 CXX test/cpp_headers/ioat.o 00:02:22.574 CXX test/cpp_headers/init.o 00:02:22.574 CXX test/cpp_headers/ioat_spec.o 00:02:22.574 CXX test/cpp_headers/json.o 00:02:22.574 CXX test/cpp_headers/iscsi_spec.o 00:02:22.574 CXX test/cpp_headers/jsonrpc.o 00:02:22.574 CXX test/cpp_headers/keyring.o 00:02:22.574 CXX test/cpp_headers/likely.o 00:02:22.574 CXX test/cpp_headers/lvol.o 00:02:22.574 CXX test/cpp_headers/log.o 00:02:22.574 CXX test/cpp_headers/keyring_module.o 00:02:22.574 CXX test/cpp_headers/mmio.o 00:02:22.574 CXX test/cpp_headers/md5.o 00:02:22.574 CXX test/cpp_headers/nbd.o 00:02:22.574 CXX test/cpp_headers/memory.o 00:02:22.574 CXX test/cpp_headers/net.o 00:02:22.574 CXX test/cpp_headers/nvme.o 00:02:22.574 CXX test/cpp_headers/nvme_intel.o 00:02:22.574 CXX test/cpp_headers/notify.o 00:02:22.574 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:22.574 CXX test/cpp_headers/nvme_spec.o 00:02:22.574 CXX test/cpp_headers/nvme_zns.o 00:02:22.574 CXX test/cpp_headers/nvme_ocssd.o 00:02:22.574 CXX test/cpp_headers/nvmf_cmd.o 00:02:22.574 CXX test/cpp_headers/nvmf_spec.o 00:02:22.574 CXX test/cpp_headers/nvmf.o 00:02:22.574 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:22.574 CC test/dma/test_dma/test_dma.o 00:02:22.574 CXX test/cpp_headers/nvmf_transport.o 00:02:22.574 CXX test/cpp_headers/opal.o 00:02:22.574 CC test/app/jsoncat/jsoncat.o 00:02:22.574 CXX test/cpp_headers/pci_ids.o 00:02:22.574 CXX test/cpp_headers/pipe.o 00:02:22.574 CXX test/cpp_headers/opal_spec.o 00:02:22.574 CXX test/cpp_headers/rpc.o 00:02:22.574 CXX test/cpp_headers/queue.o 00:02:22.574 CXX test/cpp_headers/scheduler.o 00:02:22.574 CC test/thread/poller_perf/poller_perf.o 00:02:22.574 CXX test/cpp_headers/scsi.o 00:02:22.574 CXX test/cpp_headers/reduce.o 00:02:22.574 CXX test/cpp_headers/scsi_spec.o 00:02:22.574 CC examples/util/zipf/zipf.o 00:02:22.574 CXX test/cpp_headers/stdinc.o 00:02:22.574 CXX test/cpp_headers/sock.o 00:02:22.574 CXX test/cpp_headers/trace.o 00:02:22.574 CXX test/cpp_headers/string.o 00:02:22.574 CXX test/cpp_headers/trace_parser.o 00:02:22.574 CXX test/cpp_headers/util.o 00:02:22.574 CXX 
test/cpp_headers/tree.o 00:02:22.574 CC examples/ioat/perf/perf.o 00:02:22.574 CXX test/cpp_headers/ublk.o 00:02:22.574 CXX test/cpp_headers/thread.o 00:02:22.574 CXX test/cpp_headers/uuid.o 00:02:22.574 CC test/app/histogram_perf/histogram_perf.o 00:02:22.574 CXX test/cpp_headers/version.o 00:02:22.574 CC test/app/stub/stub.o 00:02:22.574 CXX test/cpp_headers/vhost.o 00:02:22.574 CC examples/ioat/verify/verify.o 00:02:22.574 CXX test/cpp_headers/vfio_user_pci.o 00:02:22.574 CC app/fio/nvme/fio_plugin.o 00:02:22.574 CXX test/cpp_headers/xor.o 00:02:22.574 CXX test/cpp_headers/vfio_user_spec.o 00:02:22.574 CXX test/cpp_headers/vmd.o 00:02:22.574 CC test/env/memory/memory_ut.o 00:02:22.574 CXX test/cpp_headers/zipf.o 00:02:22.574 CC test/env/vtophys/vtophys.o 00:02:22.574 CC test/env/pci/pci_ut.o 00:02:22.574 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:22.574 LINK rpc_client_test 00:02:22.574 CC app/fio/bdev/fio_plugin.o 00:02:22.574 CC test/app/bdev_svc/bdev_svc.o 00:02:22.574 LINK spdk_lspci 00:02:22.840 LINK spdk_nvme_discover 00:02:22.840 LINK spdk_trace_record 00:02:22.840 LINK iscsi_tgt 00:02:22.840 LINK interrupt_tgt 00:02:23.113 LINK spdk_tgt 00:02:23.113 LINK nvmf_tgt 00:02:23.113 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:23.113 LINK spdk_dd 00:02:23.378 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.378 LINK histogram_perf 00:02:23.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:23.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:23.378 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:23.378 LINK env_dpdk_post_init 00:02:23.639 LINK spdk_trace 00:02:23.639 LINK jsoncat 00:02:23.639 LINK zipf 00:02:23.639 LINK test_dma 00:02:23.639 LINK bdev_svc 00:02:23.639 LINK ioat_perf 00:02:23.901 LINK poller_perf 00:02:23.901 LINK vtophys 00:02:23.901 LINK stub 00:02:23.901 LINK verify 00:02:23.901 LINK nvme_fuzz 00:02:23.901 CC app/vhost/vhost.o 00:02:23.901 LINK vhost_fuzz 00:02:24.164 LINK mem_callbacks 00:02:24.164 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.164 CC examples/idxd/perf/perf.o 00:02:24.164 CC examples/vmd/led/led.o 00:02:24.164 CC examples/sock/hello_world/hello_sock.o 00:02:24.164 LINK vhost 00:02:24.164 LINK pci_ut 00:02:24.164 CC examples/thread/thread/thread_ex.o 00:02:24.423 LINK spdk_bdev 00:02:24.423 LINK spdk_nvme 00:02:24.423 LINK spdk_nvme_perf 00:02:24.423 CC test/nvme/aer/aer.o 00:02:24.423 CC test/nvme/connect_stress/connect_stress.o 00:02:24.423 CC test/nvme/reset/reset.o 00:02:24.423 CC test/nvme/startup/startup.o 00:02:24.423 CC test/nvme/err_injection/err_injection.o 00:02:24.423 CC test/nvme/overhead/overhead.o 00:02:24.423 CC test/nvme/cuse/cuse.o 00:02:24.423 CC test/nvme/sgl/sgl.o 00:02:24.423 CC test/nvme/simple_copy/simple_copy.o 00:02:24.423 CC test/nvme/boot_partition/boot_partition.o 00:02:24.423 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:24.423 CC test/nvme/fused_ordering/fused_ordering.o 00:02:24.423 CC test/nvme/reserve/reserve.o 00:02:24.423 CC test/nvme/e2edp/nvme_dp.o 00:02:24.423 CC test/nvme/fdp/fdp.o 00:02:24.423 CC test/nvme/compliance/nvme_compliance.o 00:02:24.423 LINK spdk_nvme_identify 00:02:24.423 LINK lsvmd 00:02:24.423 CC test/event/reactor/reactor.o 00:02:24.423 CC test/event/reactor_perf/reactor_perf.o 00:02:24.423 CC test/event/event_perf/event_perf.o 00:02:24.423 LINK led 00:02:24.423 CC test/event/app_repeat/app_repeat.o 00:02:24.423 CC test/event/scheduler/scheduler.o 00:02:24.423 CC test/blobfs/mkfs/mkfs.o 00:02:24.423 CC test/accel/dif/dif.o 00:02:24.423 LINK spdk_top 00:02:24.423 LINK memory_ut 
00:02:24.683 LINK hello_sock 00:02:24.683 LINK connect_stress 00:02:24.683 LINK startup 00:02:24.683 LINK err_injection 00:02:24.683 LINK idxd_perf 00:02:24.683 LINK thread 00:02:24.683 CC test/lvol/esnap/esnap.o 00:02:24.683 LINK boot_partition 00:02:24.683 LINK doorbell_aers 00:02:24.683 LINK reactor 00:02:24.683 LINK fused_ordering 00:02:24.683 LINK event_perf 00:02:24.683 LINK reactor_perf 00:02:24.683 LINK reserve 00:02:24.683 LINK simple_copy 00:02:24.683 LINK app_repeat 00:02:24.683 LINK sgl 00:02:24.683 LINK aer 00:02:24.683 LINK reset 00:02:24.683 LINK nvme_dp 00:02:24.683 LINK overhead 00:02:24.683 LINK nvme_compliance 00:02:24.683 LINK scheduler 00:02:24.683 LINK mkfs 00:02:24.683 LINK fdp 00:02:24.944 CC examples/nvme/abort/abort.o 00:02:24.944 CC examples/nvme/hello_world/hello_world.o 00:02:24.944 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.204 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.204 CC examples/nvme/reconnect/reconnect.o 00:02:25.204 CC examples/nvme/arbitration/arbitration.o 00:02:25.204 CC examples/nvme/hotplug/hotplug.o 00:02:25.204 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.204 LINK dif 00:02:25.204 LINK iscsi_fuzz 00:02:25.204 CC examples/accel/perf/accel_perf.o 00:02:25.204 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:25.204 CC examples/blob/cli/blobcli.o 00:02:25.204 CC examples/blob/hello_world/hello_blob.o 00:02:25.204 LINK pmr_persistence 00:02:25.204 LINK cmb_copy 00:02:25.204 LINK hello_world 00:02:25.465 LINK hotplug 00:02:25.465 LINK arbitration 00:02:25.465 LINK abort 00:02:25.465 LINK reconnect 00:02:25.465 LINK nvme_manage 00:02:25.465 LINK hello_blob 00:02:25.465 LINK hello_fsdev 00:02:25.726 LINK cuse 00:02:25.726 LINK accel_perf 00:02:25.726 LINK blobcli 00:02:25.726 CC test/bdev/bdevio/bdevio.o 00:02:26.298 LINK bdevio 00:02:26.298 CC examples/bdev/hello_world/hello_bdev.o 00:02:26.298 CC examples/bdev/bdevperf/bdevperf.o 00:02:26.559 LINK hello_bdev 00:02:27.130 LINK bdevperf 00:02:27.702 CC examples/nvmf/nvmf/nvmf.o 00:02:27.973 LINK nvmf 00:02:29.364 LINK esnap 00:02:29.624 00:02:29.624 real 0m56.477s 00:02:29.624 user 8m10.491s 00:02:29.624 sys 6m9.593s 00:02:29.624 12:59:11 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:29.624 12:59:11 make -- common/autotest_common.sh@10 -- $ set +x 00:02:29.624 ************************************ 00:02:29.625 END TEST make 00:02:29.625 ************************************ 00:02:29.625 12:59:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.625 12:59:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:29.625 12:59:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:29.625 12:59:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.625 12:59:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.625 12:59:11 -- pm/common@44 -- $ pid=1395278 00:02:29.625 12:59:11 -- pm/common@50 -- $ kill -TERM 1395278 00:02:29.625 12:59:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.625 12:59:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:29.625 12:59:11 -- pm/common@44 -- $ pid=1395279 00:02:29.625 12:59:11 -- pm/common@50 -- $ kill -TERM 1395279 00:02:29.625 12:59:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.625 12:59:11 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:29.625 12:59:11 -- pm/common@44 -- $ pid=1395281 00:02:29.625 12:59:11 -- pm/common@50 -- $ kill -TERM 1395281 00:02:29.625 12:59:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.625 12:59:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:29.625 12:59:11 -- pm/common@44 -- $ pid=1395304 00:02:29.625 12:59:11 -- pm/common@50 -- $ sudo -E kill -TERM 1395304 00:02:29.625 12:59:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:29.625 12:59:11 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:29.625 12:59:11 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:29.625 12:59:11 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:29.625 12:59:11 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:29.886 12:59:11 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:29.886 12:59:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:29.886 12:59:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:29.886 12:59:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:29.886 12:59:11 -- scripts/common.sh@336 -- # IFS=.-: 00:02:29.886 12:59:11 -- scripts/common.sh@336 -- # read -ra ver1 00:02:29.886 12:59:11 -- scripts/common.sh@337 -- # IFS=.-: 00:02:29.886 12:59:11 -- scripts/common.sh@337 -- # read -ra ver2 00:02:29.886 12:59:11 -- scripts/common.sh@338 -- # local 'op=<' 00:02:29.886 12:59:11 -- scripts/common.sh@340 -- # ver1_l=2 00:02:29.886 12:59:11 -- scripts/common.sh@341 -- # ver2_l=1 00:02:29.886 12:59:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:29.886 12:59:11 -- scripts/common.sh@344 -- # case "$op" in 00:02:29.886 12:59:11 -- scripts/common.sh@345 -- # : 1 00:02:29.886 12:59:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:29.886 12:59:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:29.886 12:59:11 -- scripts/common.sh@365 -- # decimal 1 00:02:29.886 12:59:11 -- scripts/common.sh@353 -- # local d=1 00:02:29.886 12:59:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:29.886 12:59:11 -- scripts/common.sh@355 -- # echo 1 00:02:29.886 12:59:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:29.886 12:59:11 -- scripts/common.sh@366 -- # decimal 2 00:02:29.886 12:59:11 -- scripts/common.sh@353 -- # local d=2 00:02:29.886 12:59:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:29.886 12:59:11 -- scripts/common.sh@355 -- # echo 2 00:02:29.886 12:59:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:29.886 12:59:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:29.886 12:59:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:29.886 12:59:11 -- scripts/common.sh@368 -- # return 0 00:02:29.886 12:59:11 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:29.886 12:59:11 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.886 --rc genhtml_branch_coverage=1 00:02:29.886 --rc genhtml_function_coverage=1 00:02:29.886 --rc genhtml_legend=1 00:02:29.886 --rc geninfo_all_blocks=1 00:02:29.886 --rc geninfo_unexecuted_blocks=1 00:02:29.886 00:02:29.886 ' 00:02:29.886 12:59:11 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.886 --rc genhtml_branch_coverage=1 00:02:29.887 --rc genhtml_function_coverage=1 00:02:29.887 --rc genhtml_legend=1 00:02:29.887 --rc geninfo_all_blocks=1 00:02:29.887 --rc geninfo_unexecuted_blocks=1 00:02:29.887 00:02:29.887 ' 00:02:29.887 12:59:11 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:29.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.887 --rc genhtml_branch_coverage=1 00:02:29.887 --rc genhtml_function_coverage=1 00:02:29.887 --rc genhtml_legend=1 00:02:29.887 --rc geninfo_all_blocks=1 00:02:29.887 --rc geninfo_unexecuted_blocks=1 00:02:29.887 00:02:29.887 ' 00:02:29.887 12:59:11 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:29.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:29.887 --rc genhtml_branch_coverage=1 00:02:29.887 --rc genhtml_function_coverage=1 00:02:29.887 --rc genhtml_legend=1 00:02:29.887 --rc geninfo_all_blocks=1 00:02:29.887 --rc geninfo_unexecuted_blocks=1 00:02:29.887 00:02:29.887 ' 00:02:29.887 12:59:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.887 12:59:11 -- nvmf/common.sh@7 -- # uname -s 00:02:29.887 12:59:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.887 12:59:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.887 12:59:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.887 12:59:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.887 12:59:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.887 12:59:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.887 12:59:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.887 12:59:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.887 12:59:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.887 12:59:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.887 12:59:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:29.887 12:59:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:29.887 12:59:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.887 12:59:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.887 12:59:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.887 12:59:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:29.887 12:59:11 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:29.887 12:59:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:29.887 12:59:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.887 12:59:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.887 12:59:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.887 12:59:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.887 12:59:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.887 12:59:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.887 12:59:11 -- paths/export.sh@5 -- # export PATH 00:02:29.887 12:59:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.887 12:59:11 -- nvmf/common.sh@51 -- # : 0 00:02:29.887 12:59:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:29.887 12:59:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:29.887 12:59:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:29.887 12:59:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.887 12:59:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.887 12:59:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:29.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:29.887 12:59:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:29.887 12:59:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:29.887 12:59:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:29.887 12:59:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.887 12:59:11 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.887 12:59:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.887 12:59:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.887 12:59:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
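The xtrace above is SPDK's shell-only version gate: lt 1.15 2 splits both version strings on the characters in IFS=.-: into arrays and compares them field by field, so lcov 1.15 is correctly treated as older than 2.x. A minimal standalone sketch of that comparison follows; the name ver_lt is illustrative (the real helpers are lt/cmp_versions in scripts/common.sh) and it assumes purely numeric fields:

    #!/usr/bin/env bash
    ver_lt() {                      # return 0 if $1 is an older version than $2
        local IFS=.-: i a b
        read -ra a <<<"$1"; read -ra b <<<"$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Note also the captured "[: : integer expression expected" from test/nvmf/common.sh line 33: the trace shows '[' '' -eq 1 ']' being evaluated, and -eq requires integer operands, so an empty variable there trips the message; guarding the operand with ${var:-0} or a [[ -n $var ]] check would avoid it.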
00:02:29.887 12:59:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.887 12:59:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.887 12:59:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.887 12:59:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.887 12:59:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.887 12:59:11 -- spdk/autotest.sh@48 -- # udevadm_pid=1460837 00:02:29.887 12:59:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:29.887 12:59:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.887 12:59:11 -- pm/common@17 -- # local monitor 00:02:29.887 12:59:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.887 12:59:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.887 12:59:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.887 12:59:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.887 12:59:11 -- pm/common@21 -- # date +%s 00:02:29.887 12:59:11 -- pm/common@21 -- # date +%s 00:02:29.887 12:59:11 -- pm/common@25 -- # sleep 1 00:02:29.887 12:59:11 -- pm/common@21 -- # date +%s 00:02:29.887 12:59:11 -- pm/common@21 -- # date +%s 00:02:29.887 12:59:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730894351 00:02:29.887 12:59:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730894351 00:02:29.887 12:59:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730894351 00:02:29.887 12:59:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730894351 00:02:29.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730894351_collect-cpu-load.pm.log 00:02:29.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730894351_collect-vmstat.pm.log 00:02:29.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730894351_collect-cpu-temp.pm.log 00:02:29.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730894351_collect-bmc-pm.bmc.pm.log 00:02:30.909 12:59:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.909 12:59:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:30.909 12:59:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:30.909 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:02:30.909 12:59:12 -- spdk/autotest.sh@59 -- # create_test_list 00:02:30.909 12:59:12 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:30.909 12:59:12 -- common/autotest_common.sh@10 -- # set +x 00:02:30.909 12:59:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:30.909 12:59:12 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.909 12:59:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.909 12:59:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:30.909 12:59:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.909 12:59:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:30.909 12:59:12 -- common/autotest_common.sh@1455 -- # uname 00:02:30.909 12:59:12 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:30.909 12:59:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:30.909 12:59:12 -- common/autotest_common.sh@1475 -- # uname 00:02:30.909 12:59:12 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:30.909 12:59:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:30.909 12:59:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:30.909 lcov: LCOV version 1.15 00:02:30.909 12:59:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:57.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:57.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.735 12:59:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:01.735 12:59:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:01.735 12:59:42 -- common/autotest_common.sh@10 -- # set +x 00:03:01.735 12:59:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:01.735 12:59:42 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.041 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:05.041 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.041 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.301 12:59:47 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:05.301 12:59:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:05.301 12:59:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:05.301 12:59:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:05.301 12:59:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:05.301 12:59:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:05.301 12:59:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:05.301 12:59:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.301 12:59:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:05.301 12:59:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:05.301 12:59:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.301 12:59:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:05.301 12:59:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:05.301 12:59:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:05.301 12:59:47 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.301 No valid GPT data, bailing 00:03:05.301 12:59:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.301 12:59:47 -- scripts/common.sh@394 -- # pt= 00:03:05.301 12:59:47 -- scripts/common.sh@395 -- # return 1 00:03:05.301 12:59:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.301 1+0 records in 00:03:05.301 1+0 records out 00:03:05.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00147119 s, 713 MB/s 00:03:05.301 12:59:47 -- spdk/autotest.sh@105 -- # sync 00:03:05.562 12:59:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.562 12:59:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.562 12:59:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:15.564 12:59:55 -- spdk/autotest.sh@111 -- # uname -s 00:03:15.564 12:59:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:15.564 12:59:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:15.564 12:59:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:17.476 Hugepages 00:03:17.476 node hugesize free / total 00:03:17.476 node0 1048576kB 0 / 0 00:03:17.476 node0 2048kB 0 / 0 00:03:17.476 node1 1048576kB 0 / 0 00:03:17.476 node1 2048kB 0 / 0 00:03:17.476 00:03:17.476 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.477 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:17.477 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:17.737 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:17.737 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:17.737 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:17.737 12:59:59 -- spdk/autotest.sh@117 -- # uname -s 00:03:17.737 12:59:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:17.737 12:59:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:17.737 12:59:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.945 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.945 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:23.332 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:23.593 13:00:05 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:24.534 13:00:06 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:24.534 13:00:06 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:24.534 13:00:06 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.534 13:00:06 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:24.534 13:00:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:24.534 13:00:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:24.534 13:00:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.534 13:00:06 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.534 13:00:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:24.534 13:00:06 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:24.534 13:00:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:24.534 13:00:06 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.740 Waiting for block devices as requested 00:03:28.740 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:28.740 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:28.740 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:28.740 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:28.740 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:28.741 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:28.741 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:28.741 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:28.741 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:29.002 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:29.002 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:29.263 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.263 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:29.263 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:29.263 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:29.524 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:29.524 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:29.785 13:00:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:29.785 13:00:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:29.785 13:00:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:29.785 13:00:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:29.785 13:00:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:29.785 13:00:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:29.785 13:00:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:29.785 13:00:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:29.785 13:00:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:29.785 13:00:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:29.785 13:00:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:29.785 13:00:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:29.785 13:00:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:29.785 13:00:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:29.785 13:00:11 -- common/autotest_common.sh@1541 -- # continue 00:03:29.785 13:00:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:29.785 13:00:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:29.785 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:03:30.047 13:00:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:30.047 13:00:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.047 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:03:30.047 13:00:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.349 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.349 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.349 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.349 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.611 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:34.185 13:00:15 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:34.185 13:00:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:34.185 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:34.185 13:00:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.185 13:00:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:34.185 13:00:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.185 13:00:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:34.185 13:00:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:34.185 13:00:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:34.185 13:00:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.185 13:00:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:34.185 13:00:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:34.185 13:00:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:34.185 13:00:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.185 13:00:15 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.185 13:00:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:34.185 13:00:15 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:34.185 13:00:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:34.185 13:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:34.185 13:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:34.185 13:00:15 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:34.185 13:00:15 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:34.185 13:00:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:34.185 13:00:15 -- common/autotest_common.sh@1570 -- # return 0 00:03:34.185 13:00:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:34.185 13:00:15 -- common/autotest_common.sh@1578 -- # return 0 00:03:34.185 13:00:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:34.185 13:00:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:34.185 13:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.185 13:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:34.185 13:00:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:34.185 13:00:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.185 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:34.185 13:00:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:34.185 13:00:15 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.185 13:00:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:34.185 13:00:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:34.185 13:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:34.185 ************************************ 00:03:34.185 START TEST env 00:03:34.185 ************************************ 00:03:34.185 13:00:16 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.446 * Looking for test storage... 
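The opal-revert cleanup traced above discovers NVMe controllers by piping the scripts/gen_nvme.sh JSON through jq for each traddr, then reads the PCI device ID from sysfs and reverts only devices reporting 0x0a54; the 144d:a80a controller in this rig reports 0xa80a, so nothing is reverted. A hedged sketch of that discovery loop, assuming gen_nvme.sh emits the config shown in the trace:

    #!/usr/bin/env bash
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a here
        if [[ $device == 0x0a54 ]]; then
            echo "$bdf: would opal-revert"
        else
            echo "$bdf: skipped (device $device)"
        fi
    done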
00:03:34.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:34.446 13:00:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.446 13:00:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.446 13:00:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.446 13:00:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.446 13:00:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.446 13:00:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.446 13:00:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.446 13:00:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.446 13:00:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.446 13:00:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.446 13:00:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.446 13:00:16 env -- scripts/common.sh@344 -- # case "$op" in 00:03:34.446 13:00:16 env -- scripts/common.sh@345 -- # : 1 00:03:34.446 13:00:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.446 13:00:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:34.446 13:00:16 env -- scripts/common.sh@365 -- # decimal 1 00:03:34.446 13:00:16 env -- scripts/common.sh@353 -- # local d=1 00:03:34.446 13:00:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.446 13:00:16 env -- scripts/common.sh@355 -- # echo 1 00:03:34.446 13:00:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.446 13:00:16 env -- scripts/common.sh@366 -- # decimal 2 00:03:34.446 13:00:16 env -- scripts/common.sh@353 -- # local d=2 00:03:34.446 13:00:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.446 13:00:16 env -- scripts/common.sh@355 -- # echo 2 00:03:34.446 13:00:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.446 13:00:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.446 13:00:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.446 13:00:16 env -- scripts/common.sh@368 -- # return 0 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:34.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.446 --rc genhtml_branch_coverage=1 00:03:34.446 --rc genhtml_function_coverage=1 00:03:34.446 --rc genhtml_legend=1 00:03:34.446 --rc geninfo_all_blocks=1 00:03:34.446 --rc geninfo_unexecuted_blocks=1 00:03:34.446 00:03:34.446 ' 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:34.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.446 --rc genhtml_branch_coverage=1 00:03:34.446 --rc genhtml_function_coverage=1 00:03:34.446 --rc genhtml_legend=1 00:03:34.446 --rc geninfo_all_blocks=1 00:03:34.446 --rc geninfo_unexecuted_blocks=1 00:03:34.446 00:03:34.446 ' 00:03:34.446 13:00:16 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:34.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.446 --rc genhtml_branch_coverage=1 00:03:34.447 --rc genhtml_function_coverage=1 
00:03:34.447 --rc genhtml_legend=1 00:03:34.447 --rc geninfo_all_blocks=1 00:03:34.447 --rc geninfo_unexecuted_blocks=1 00:03:34.447 00:03:34.447 ' 00:03:34.447 13:00:16 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.447 --rc genhtml_branch_coverage=1 00:03:34.447 --rc genhtml_function_coverage=1 00:03:34.447 --rc genhtml_legend=1 00:03:34.447 --rc geninfo_all_blocks=1 00:03:34.447 --rc geninfo_unexecuted_blocks=1 00:03:34.447 00:03:34.447 ' 00:03:34.447 13:00:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.447 13:00:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:34.447 13:00:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:34.447 13:00:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.447 ************************************ 00:03:34.447 START TEST env_memory 00:03:34.447 ************************************ 00:03:34.447 13:00:16 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.447 00:03:34.447 00:03:34.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.447 http://cunit.sourceforge.net/ 00:03:34.447 00:03:34.447 00:03:34.447 Suite: memory 00:03:34.447 Test: alloc and free memory map ...[2024-11-06 13:00:16.316246] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:34.447 passed 00:03:34.447 Test: mem map translation ...[2024-11-06 13:00:16.341799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:34.447 [2024-11-06 13:00:16.341827] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:34.447 [2024-11-06 13:00:16.341875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:34.447 [2024-11-06 13:00:16.341882] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:34.709 passed 00:03:34.709 Test: mem map registration ...[2024-11-06 13:00:16.397076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:34.709 [2024-11-06 13:00:16.397105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:34.709 passed 00:03:34.709 Test: mem map adjacent registrations ...passed 00:03:34.709 00:03:34.709 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.709 suites 1 1 n/a 0 0 00:03:34.709 tests 4 4 4 0 0 00:03:34.709 asserts 152 152 152 0 n/a 00:03:34.709 00:03:34.709 Elapsed time = 0.193 seconds 00:03:34.709 00:03:34.709 real 0m0.208s 00:03:34.709 user 0m0.194s 00:03:34.709 sys 0m0.013s 00:03:34.709 13:00:16 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:34.709 13:00:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
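The *ERROR* lines inside the passing mem-map tests above appear to be deliberate negative checks: memory_ut feeds invalid vaddr/len pairs into spdk_mem_map_set_translation and asserts the rejection. The START/END TEST banners (including the one just below) and the real/user/sys timing around each binary come from autotest's run_test helper; a rough bash sketch of that pattern, with run_test_sketch as an illustrative stand-in (the actual helper in test/common/autotest_common.sh also handles xtrace and exit-code bookkeeping):

    #!/usr/bin/env bash
    run_test_sketch() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"                   # the log's real/user/sys lines come from a timed run
        local rc=$?                 # time is a keyword, so $? is the command's status
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return $rc
    }
    run_test_sketch env_memory ./test/env/memory/memory_ut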
00:03:34.709 ************************************ 00:03:34.709 END TEST env_memory 00:03:34.709 ************************************ 00:03:34.709 13:00:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.709 13:00:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:34.709 13:00:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:34.709 13:00:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.709 ************************************ 00:03:34.709 START TEST env_vtophys 00:03:34.709 ************************************ 00:03:34.709 13:00:16 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.709 EAL: lib.eal log level changed from notice to debug 00:03:34.709 EAL: Detected lcore 0 as core 0 on socket 0 00:03:34.709 EAL: Detected lcore 1 as core 1 on socket 0 00:03:34.709 EAL: Detected lcore 2 as core 2 on socket 0 00:03:34.709 EAL: Detected lcore 3 as core 3 on socket 0 00:03:34.709 EAL: Detected lcore 4 as core 4 on socket 0 00:03:34.709 EAL: Detected lcore 5 as core 5 on socket 0 00:03:34.709 EAL: Detected lcore 6 as core 6 on socket 0 00:03:34.709 EAL: Detected lcore 7 as core 7 on socket 0 00:03:34.709 EAL: Detected lcore 8 as core 8 on socket 0 00:03:34.709 EAL: Detected lcore 9 as core 9 on socket 0 00:03:34.709 EAL: Detected lcore 10 as core 10 on socket 0 00:03:34.709 EAL: Detected lcore 11 as core 11 on socket 0 00:03:34.709 EAL: Detected lcore 12 as core 12 on socket 0 00:03:34.709 EAL: Detected lcore 13 as core 13 on socket 0 00:03:34.709 EAL: Detected lcore 14 as core 14 on socket 0 00:03:34.709 EAL: Detected lcore 15 as core 15 on socket 0 00:03:34.709 EAL: Detected lcore 16 as core 16 on socket 0 00:03:34.709 EAL: Detected lcore 17 as core 17 on socket 0 00:03:34.709 EAL: Detected lcore 18 as core 18 on socket 0 00:03:34.709 EAL: Detected lcore 19 as core 19 on socket 0 00:03:34.709 EAL: Detected lcore 20 as core 20 on socket 0 00:03:34.709 EAL: Detected lcore 21 as core 21 on socket 0 00:03:34.709 EAL: Detected lcore 22 as core 22 on socket 0 00:03:34.709 EAL: Detected lcore 23 as core 23 on socket 0 00:03:34.709 EAL: Detected lcore 24 as core 24 on socket 0 00:03:34.709 EAL: Detected lcore 25 as core 25 on socket 0 00:03:34.709 EAL: Detected lcore 26 as core 26 on socket 0 00:03:34.709 EAL: Detected lcore 27 as core 27 on socket 0 00:03:34.709 EAL: Detected lcore 28 as core 28 on socket 0 00:03:34.709 EAL: Detected lcore 29 as core 29 on socket 0 00:03:34.709 EAL: Detected lcore 30 as core 30 on socket 0 00:03:34.709 EAL: Detected lcore 31 as core 31 on socket 0 00:03:34.709 EAL: Detected lcore 32 as core 32 on socket 0 00:03:34.709 EAL: Detected lcore 33 as core 33 on socket 0 00:03:34.709 EAL: Detected lcore 34 as core 34 on socket 0 00:03:34.709 EAL: Detected lcore 35 as core 35 on socket 0 00:03:34.709 EAL: Detected lcore 36 as core 0 on socket 1 00:03:34.709 EAL: Detected lcore 37 as core 1 on socket 1 00:03:34.709 EAL: Detected lcore 38 as core 2 on socket 1 00:03:34.709 EAL: Detected lcore 39 as core 3 on socket 1 00:03:34.709 EAL: Detected lcore 40 as core 4 on socket 1 00:03:34.709 EAL: Detected lcore 41 as core 5 on socket 1 00:03:34.709 EAL: Detected lcore 42 as core 6 on socket 1 00:03:34.709 EAL: Detected lcore 43 as core 7 on socket 1 00:03:34.709 EAL: Detected lcore 44 as core 8 on socket 1 00:03:34.709 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:34.709 EAL: Detected lcore 46 as core 10 on socket 1 00:03:34.709 EAL: Detected lcore 47 as core 11 on socket 1 00:03:34.709 EAL: Detected lcore 48 as core 12 on socket 1 00:03:34.709 EAL: Detected lcore 49 as core 13 on socket 1 00:03:34.709 EAL: Detected lcore 50 as core 14 on socket 1 00:03:34.709 EAL: Detected lcore 51 as core 15 on socket 1 00:03:34.709 EAL: Detected lcore 52 as core 16 on socket 1 00:03:34.709 EAL: Detected lcore 53 as core 17 on socket 1 00:03:34.709 EAL: Detected lcore 54 as core 18 on socket 1 00:03:34.709 EAL: Detected lcore 55 as core 19 on socket 1 00:03:34.709 EAL: Detected lcore 56 as core 20 on socket 1 00:03:34.709 EAL: Detected lcore 57 as core 21 on socket 1 00:03:34.709 EAL: Detected lcore 58 as core 22 on socket 1 00:03:34.709 EAL: Detected lcore 59 as core 23 on socket 1 00:03:34.709 EAL: Detected lcore 60 as core 24 on socket 1 00:03:34.709 EAL: Detected lcore 61 as core 25 on socket 1 00:03:34.709 EAL: Detected lcore 62 as core 26 on socket 1 00:03:34.709 EAL: Detected lcore 63 as core 27 on socket 1 00:03:34.709 EAL: Detected lcore 64 as core 28 on socket 1 00:03:34.709 EAL: Detected lcore 65 as core 29 on socket 1 00:03:34.709 EAL: Detected lcore 66 as core 30 on socket 1 00:03:34.709 EAL: Detected lcore 67 as core 31 on socket 1 00:03:34.709 EAL: Detected lcore 68 as core 32 on socket 1 00:03:34.709 EAL: Detected lcore 69 as core 33 on socket 1 00:03:34.709 EAL: Detected lcore 70 as core 34 on socket 1 00:03:34.709 EAL: Detected lcore 71 as core 35 on socket 1 00:03:34.709 EAL: Detected lcore 72 as core 0 on socket 0 00:03:34.709 EAL: Detected lcore 73 as core 1 on socket 0 00:03:34.709 EAL: Detected lcore 74 as core 2 on socket 0 00:03:34.709 EAL: Detected lcore 75 as core 3 on socket 0 00:03:34.709 EAL: Detected lcore 76 as core 4 on socket 0 00:03:34.709 EAL: Detected lcore 77 as core 5 on socket 0 00:03:34.709 EAL: Detected lcore 78 as core 6 on socket 0 00:03:34.709 EAL: Detected lcore 79 as core 7 on socket 0 00:03:34.709 EAL: Detected lcore 80 as core 8 on socket 0 00:03:34.709 EAL: Detected lcore 81 as core 9 on socket 0 00:03:34.709 EAL: Detected lcore 82 as core 10 on socket 0 00:03:34.709 EAL: Detected lcore 83 as core 11 on socket 0 00:03:34.709 EAL: Detected lcore 84 as core 12 on socket 0 00:03:34.709 EAL: Detected lcore 85 as core 13 on socket 0 00:03:34.709 EAL: Detected lcore 86 as core 14 on socket 0 00:03:34.709 EAL: Detected lcore 87 as core 15 on socket 0 00:03:34.709 EAL: Detected lcore 88 as core 16 on socket 0 00:03:34.709 EAL: Detected lcore 89 as core 17 on socket 0 00:03:34.709 EAL: Detected lcore 90 as core 18 on socket 0 00:03:34.709 EAL: Detected lcore 91 as core 19 on socket 0 00:03:34.709 EAL: Detected lcore 92 as core 20 on socket 0 00:03:34.709 EAL: Detected lcore 93 as core 21 on socket 0 00:03:34.710 EAL: Detected lcore 94 as core 22 on socket 0 00:03:34.710 EAL: Detected lcore 95 as core 23 on socket 0 00:03:34.710 EAL: Detected lcore 96 as core 24 on socket 0 00:03:34.710 EAL: Detected lcore 97 as core 25 on socket 0 00:03:34.710 EAL: Detected lcore 98 as core 26 on socket 0 00:03:34.710 EAL: Detected lcore 99 as core 27 on socket 0 00:03:34.710 EAL: Detected lcore 100 as core 28 on socket 0 00:03:34.710 EAL: Detected lcore 101 as core 29 on socket 0 00:03:34.710 EAL: Detected lcore 102 as core 30 on socket 0 00:03:34.710 EAL: Detected lcore 103 as core 31 on socket 0 00:03:34.710 EAL: Detected lcore 104 as core 32 on socket 0 00:03:34.710 EAL: Detected lcore 105 as core 33 on socket 0 00:03:34.710 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:34.710 EAL: Detected lcore 107 as core 35 on socket 0 00:03:34.710 EAL: Detected lcore 108 as core 0 on socket 1 00:03:34.710 EAL: Detected lcore 109 as core 1 on socket 1 00:03:34.710 EAL: Detected lcore 110 as core 2 on socket 1 00:03:34.710 EAL: Detected lcore 111 as core 3 on socket 1 00:03:34.710 EAL: Detected lcore 112 as core 4 on socket 1 00:03:34.710 EAL: Detected lcore 113 as core 5 on socket 1 00:03:34.710 EAL: Detected lcore 114 as core 6 on socket 1 00:03:34.710 EAL: Detected lcore 115 as core 7 on socket 1 00:03:34.710 EAL: Detected lcore 116 as core 8 on socket 1 00:03:34.710 EAL: Detected lcore 117 as core 9 on socket 1 00:03:34.710 EAL: Detected lcore 118 as core 10 on socket 1 00:03:34.710 EAL: Detected lcore 119 as core 11 on socket 1 00:03:34.710 EAL: Detected lcore 120 as core 12 on socket 1 00:03:34.710 EAL: Detected lcore 121 as core 13 on socket 1 00:03:34.710 EAL: Detected lcore 122 as core 14 on socket 1 00:03:34.710 EAL: Detected lcore 123 as core 15 on socket 1 00:03:34.710 EAL: Detected lcore 124 as core 16 on socket 1 00:03:34.710 EAL: Detected lcore 125 as core 17 on socket 1 00:03:34.710 EAL: Detected lcore 126 as core 18 on socket 1 00:03:34.710 EAL: Detected lcore 127 as core 19 on socket 1 00:03:34.710 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:34.710 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:34.710 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:34.710 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:34.710 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:34.710 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:34.710 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:34.710 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:34.710 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:34.710 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:34.710 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:34.710 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:34.710 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:34.710 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:34.710 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:34.710 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:34.710 EAL: Maximum logical cores by configuration: 128 00:03:34.710 EAL: Detected CPU lcores: 128 00:03:34.710 EAL: Detected NUMA nodes: 2 00:03:34.710 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:34.710 EAL: Detected shared linkage of DPDK 00:03:34.710 EAL: No shared files mode enabled, IPC will be disabled 00:03:34.973 EAL: Bus pci wants IOVA as 'DC' 00:03:34.973 EAL: Buses did not request a specific IOVA mode. 00:03:34.973 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:34.973 EAL: Selected IOVA mode 'VA' 00:03:34.973 EAL: Probing VFIO support... 00:03:34.973 EAL: IOMMU type 1 (Type 1) is supported 00:03:34.973 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:34.973 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:34.973 EAL: VFIO support initialized 00:03:34.973 EAL: Ask a virtual area of 0x2e000 bytes 00:03:34.973 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:34.973 EAL: Setting up physically contiguous memory... 
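In the memseg setup that follows, each "Ask a virtual area of 0x400000000 bytes" is EAL pre-reserving address space for one segment list: n_segs 8192 times hugepage_sz 2 MiB equals 16 GiB, and the log creates four such lists per NUMA node (the small 0x61000-byte reservations alongside them presumably hold per-list bookkeeping). The 16 GiB figure checks out directly:

    printf '0x%x\n' $((8192 * 2097152))   # -> 0x400000000, one 16 GiB memseg list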
00:03:34.973 EAL: Setting maximum number of open files to 524288 00:03:34.973 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:34.973 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:34.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:34.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:34.973 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.973 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:34.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.973 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.973 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:34.973 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:34.973 EAL: Hugepages will be freed exactly as allocated. 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: TSC frequency is ~2400000 KHz 00:03:34.973 EAL: Main lcore 0 is ready (tid=7fc49b7baa00;cpuset=[0]) 00:03:34.973 EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 0 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 2MB 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.973 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.973 00:03:34.973 00:03:34.973 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.973 http://cunit.sourceforge.net/ 00:03:34.973 00:03:34.973 00:03:34.973 Suite: components_suite 00:03:34.973 Test: vtophys_malloc_test ...passed 00:03:34.973 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.973 EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 6MB 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was shrunk by 6MB 00:03:34.973 EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 10MB 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was shrunk by 10MB 00:03:34.973 EAL: Trying to obtain current memory policy. 
00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 18MB 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was shrunk by 18MB 00:03:34.973 EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was expanded by 34MB 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.973 EAL: Heap on socket 0 was shrunk by 34MB 00:03:34.973 EAL: Trying to obtain current memory policy. 00:03:34.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.973 EAL: Restoring previous memory policy: 4 00:03:34.973 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.973 EAL: request: mp_malloc_sync 00:03:34.973 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was expanded by 66MB 00:03:34.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.974 EAL: request: mp_malloc_sync 00:03:34.974 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was shrunk by 66MB 00:03:34.974 EAL: Trying to obtain current memory policy. 00:03:34.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.974 EAL: Restoring previous memory policy: 4 00:03:34.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.974 EAL: request: mp_malloc_sync 00:03:34.974 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was expanded by 130MB 00:03:34.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.974 EAL: request: mp_malloc_sync 00:03:34.974 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was shrunk by 130MB 00:03:34.974 EAL: Trying to obtain current memory policy. 00:03:34.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.974 EAL: Restoring previous memory policy: 4 00:03:34.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.974 EAL: request: mp_malloc_sync 00:03:34.974 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was expanded by 258MB 00:03:34.974 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.974 EAL: request: mp_malloc_sync 00:03:34.974 EAL: No shared files mode enabled, IPC is disabled 00:03:34.974 EAL: Heap on socket 0 was shrunk by 258MB 00:03:34.974 EAL: Trying to obtain current memory policy. 
00:03:34.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.235 EAL: Restoring previous memory policy: 4 00:03:35.235 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.235 EAL: request: mp_malloc_sync 00:03:35.235 EAL: No shared files mode enabled, IPC is disabled 00:03:35.235 EAL: Heap on socket 0 was expanded by 514MB 00:03:35.235 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.235 EAL: request: mp_malloc_sync 00:03:35.235 EAL: No shared files mode enabled, IPC is disabled 00:03:35.235 EAL: Heap on socket 0 was shrunk by 514MB 00:03:35.235 EAL: Trying to obtain current memory policy. 00:03:35.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.496 EAL: Restoring previous memory policy: 4 00:03:35.496 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.496 EAL: request: mp_malloc_sync 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 EAL: Heap on socket 0 was expanded by 1026MB 00:03:35.496 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.496 EAL: request: mp_malloc_sync 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:35.496 passed 00:03:35.496 00:03:35.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.496 suites 1 1 n/a 0 0 00:03:35.496 tests 2 2 2 0 0 00:03:35.496 asserts 497 497 497 0 n/a 00:03:35.496 00:03:35.496 Elapsed time = 0.688 seconds 00:03:35.496 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.496 EAL: request: mp_malloc_sync 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 EAL: Heap on socket 0 was shrunk by 2MB 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 EAL: No shared files mode enabled, IPC is disabled 00:03:35.496 00:03:35.496 real 0m0.838s 00:03:35.496 user 0m0.437s 00:03:35.496 sys 0m0.377s 00:03:35.496 13:00:17 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:35.496 13:00:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:35.496 ************************************ 00:03:35.496 END TEST env_vtophys 00:03:35.496 ************************************ 00:03:35.757 13:00:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.757 13:00:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:35.757 13:00:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:35.757 13:00:17 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.757 ************************************ 00:03:35.757 START TEST env_pci 00:03:35.757 ************************************ 00:03:35.757 13:00:17 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.757 00:03:35.757 00:03:35.757 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.757 http://cunit.sourceforge.net/ 00:03:35.757 00:03:35.757 00:03:35.757 Suite: pci 00:03:35.757 Test: pci_hook ...[2024-11-06 13:00:17.491330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1480912 has claimed it 00:03:35.757 EAL: Cannot find device (10000:00:01.0) 00:03:35.757 EAL: Failed to attach device on primary process 00:03:35.757 passed 00:03:35.757 00:03:35.757 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:35.757 suites 1 1 n/a 0 0
00:03:35.757 tests 1 1 1 0 0
00:03:35.757 asserts 25 25 25 0 n/a
00:03:35.757
00:03:35.757 Elapsed time = 0.031 seconds
00:03:35.757
00:03:35.757 real 0m0.054s
00:03:35.757 user 0m0.018s
00:03:35.757 sys 0m0.035s
13:00:17 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:35.757 13:00:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:35.757 ************************************
00:03:35.757 END TEST env_pci
00:03:35.757 ************************************
00:03:35.757 13:00:17 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:35.757 13:00:17 env -- env/env.sh@15 -- # uname
00:03:35.757 13:00:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:35.757 13:00:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:35.757 13:00:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:35.757 13:00:17 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:03:35.757 13:00:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:35.757 13:00:17 env -- common/autotest_common.sh@10 -- # set +x
00:03:35.757 ************************************
00:03:35.757 START TEST env_dpdk_post_init
00:03:35.757 ************************************
00:03:35.757 13:00:17 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:35.757 EAL: Detected CPU lcores: 128
00:03:35.757 EAL: Detected NUMA nodes: 2
00:03:35.757 EAL: Detected shared linkage of DPDK
00:03:35.757 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:36.019 EAL: Selected IOVA mode 'VA'
00:03:36.019 EAL: VFIO support initialized
00:03:36.019 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:36.019 EAL: Using IOMMU type 1 (Type 1)
00:03:36.019 EAL: Ignore mapping IO port bar(1)
00:03:36.280 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:03:36.280 EAL: Ignore mapping IO port bar(1)
00:03:36.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:03:36.541 EAL: Ignore mapping IO port bar(1)
00:03:36.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:03:36.802 EAL: Ignore mapping IO port bar(1)
00:03:36.802 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:03:37.062 EAL: Ignore mapping IO port bar(1)
00:03:37.062 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:03:37.324 EAL: Ignore mapping IO port bar(1)
00:03:37.324 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:03:37.604 EAL: Ignore mapping IO port bar(1)
00:03:37.604 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:03:37.604 EAL: Ignore mapping IO port bar(1)
00:03:37.864 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:03:38.124 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:03:38.124 EAL: Ignore mapping IO port bar(1)
00:03:38.124 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:03:38.384 EAL: Ignore mapping IO port bar(1)
00:03:38.384 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:03:38.645 EAL: Ignore mapping IO port bar(1)
00:03:38.645 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:03:38.905 EAL: Ignore mapping IO port bar(1)
00:03:38.905 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:03:38.905 EAL: Ignore mapping IO port bar(1)
00:03:39.165 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:03:39.165 EAL: Ignore mapping IO port bar(1)
00:03:39.426 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:03:39.426 EAL: Ignore mapping IO port bar(1)
00:03:39.686 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:03:39.686 EAL: Ignore mapping IO port bar(1)
00:03:39.686 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:03:39.686 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:03:39.686 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:03:39.947 Starting DPDK initialization...
00:03:39.947 Starting SPDK post initialization...
00:03:39.947 SPDK NVMe probe
00:03:39.947 Attaching to 0000:65:00.0
00:03:39.947 Attached to 0000:65:00.0
00:03:39.947 Cleaning up...
00:03:41.860
00:03:41.860 real 0m5.748s
00:03:41.860 user 0m0.103s
00:03:41.860 sys 0m0.203s
13:00:23 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:41.860 13:00:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:41.861 ************************************
00:03:41.861 END TEST env_dpdk_post_init
00:03:41.861 ************************************
00:03:41.861 13:00:23 env -- env/env.sh@26 -- # uname
00:03:41.861 13:00:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:41.861 13:00:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:41.861 13:00:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:41.861 13:00:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:41.861 13:00:23 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.861 ************************************
00:03:41.861 START TEST env_mem_callbacks
00:03:41.861 ************************************
00:03:41.861 13:00:23 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:41.861 EAL: Detected CPU lcores: 128
00:03:41.861 EAL: Detected NUMA nodes: 2
00:03:41.861 EAL: Detected shared linkage of DPDK
00:03:41.861 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:41.861 EAL: Selected IOVA mode 'VA'
00:03:41.861 EAL: VFIO support initialized
00:03:41.861 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:41.861
00:03:41.861
00:03:41.861 CUnit - A unit testing framework for C - Version 2.1-3
00:03:41.861 http://cunit.sourceforge.net/
00:03:41.861
00:03:41.861
00:03:41.861 Suite: memory
00:03:41.861 Test: test ...
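The register/unregister trace that follows is the mem_callbacks suite pushing buffers in and out of SPDK's memory map. A hedged sketch of the underlying calls from spdk/env.h; the buffer and length here are placeholders supplied by the caller:

#include "spdk/env.h"

/* Make a buffer visible to SPDK's address translation, use it, then
 * drop it from the memory map again (mirrors the register/buf/free/
 * unregister lines in the trace below). */
static int
register_then_unregister(void *buf, size_t len)
{
	int rc = spdk_mem_register(buf, len);

	if (rc != 0) {
		return rc;
	}
	/* ... buf may now be used for DMA / vtophys lookups ... */
	return spdk_mem_unregister(buf, len);
}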
00:03:41.861 register 0x200000200000 2097152
00:03:41.861 malloc 3145728
00:03:41.861 register 0x200000400000 4194304
00:03:41.861 buf 0x200000500000 len 3145728 PASSED
00:03:41.861 malloc 64
00:03:41.861 buf 0x2000004fff40 len 64 PASSED
00:03:41.861 malloc 4194304
00:03:41.861 register 0x200000800000 6291456
00:03:41.861 buf 0x200000a00000 len 4194304 PASSED
00:03:41.861 free 0x200000500000 3145728
00:03:41.861 free 0x2000004fff40 64
00:03:41.861 unregister 0x200000400000 4194304 PASSED
00:03:41.861 free 0x200000a00000 4194304
00:03:41.861 unregister 0x200000800000 6291456 PASSED
00:03:41.861 malloc 8388608
00:03:41.861 register 0x200000400000 10485760
00:03:41.861 buf 0x200000600000 len 8388608 PASSED
00:03:41.861 free 0x200000600000 8388608
00:03:41.861 unregister 0x200000400000 10485760 PASSED
00:03:41.861 passed
00:03:41.861
00:03:41.861 Run Summary: Type Total Ran Passed Failed Inactive
00:03:41.861 suites 1 1 n/a 0 0
00:03:41.861 tests 1 1 1 0 0
00:03:41.861 asserts 15 15 15 0 n/a
00:03:41.861
00:03:41.861 Elapsed time = 0.010 seconds
00:03:41.861
00:03:41.861 real 0m0.069s
00:03:41.861 user 0m0.017s
00:03:41.861 sys 0m0.052s
13:00:23 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:41.861 13:00:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:41.861 ************************************
00:03:41.861 END TEST env_mem_callbacks
00:03:41.861 ************************************
00:03:41.861
00:03:41.861 real 0m7.539s
00:03:41.861 user 0m1.044s
00:03:41.861 sys 0m1.063s
13:00:23 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:03:41.861 13:00:23 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.861 ************************************
00:03:41.861 END TEST env
00:03:41.861 ************************************
00:03:41.861 13:00:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:41.861 13:00:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:03:41.861 13:00:23 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:03:41.861 13:00:23 -- common/autotest_common.sh@10 -- # set +x
00:03:41.861 ************************************
00:03:41.861 START TEST rpc
00:03:41.861 ************************************
00:03:41.861 13:00:23 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:41.861 * Looking for test storage...
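That closes out the env suite (env_vtophys, env_pci, env_dpdk_post_init, env_mem_callbacks). For reference, a hedged sketch of the translation env_vtophys exercised earlier, assuming the two-argument spdk_vtophys() of recent SPDK releases:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Allocate pinned memory from the EAL heap (the kind of allocation
 * that triggered the heap expand/shrink events earlier) and translate
 * the virtual address to a physical one. */
static int
vtophys_roundtrip(void)
{
	void *buf = spdk_dma_malloc(4096, 4096, NULL);
	uint64_t paddr;

	if (buf == NULL) {
		return -1;
	}
	paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}
	printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
	spdk_dma_free(buf);
	return 0;
}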
00:03:41.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.861 13:00:23 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.861 13:00:23 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.861 13:00:23 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:42.122 13:00:23 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.123 13:00:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.123 13:00:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.123 13:00:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.123 13:00:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.123 13:00:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.123 13:00:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:42.123 13:00:23 rpc -- scripts/common.sh@345 -- # : 1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.123 13:00:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.123 13:00:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@353 -- # local d=1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.123 13:00:23 rpc -- scripts/common.sh@355 -- # echo 1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.123 13:00:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@353 -- # local d=2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.123 13:00:23 rpc -- scripts/common.sh@355 -- # echo 2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.123 13:00:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.123 13:00:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.123 13:00:23 rpc -- scripts/common.sh@368 -- # return 0 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.123 --rc genhtml_branch_coverage=1 00:03:42.123 --rc genhtml_function_coverage=1 00:03:42.123 --rc genhtml_legend=1 00:03:42.123 --rc geninfo_all_blocks=1 00:03:42.123 --rc geninfo_unexecuted_blocks=1 00:03:42.123 00:03:42.123 ' 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.123 --rc genhtml_branch_coverage=1 00:03:42.123 --rc genhtml_function_coverage=1 00:03:42.123 --rc genhtml_legend=1 00:03:42.123 --rc geninfo_all_blocks=1 00:03:42.123 --rc geninfo_unexecuted_blocks=1 00:03:42.123 00:03:42.123 ' 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.123 --rc genhtml_branch_coverage=1 00:03:42.123 --rc genhtml_function_coverage=1 
00:03:42.123 --rc genhtml_legend=1 00:03:42.123 --rc geninfo_all_blocks=1 00:03:42.123 --rc geninfo_unexecuted_blocks=1 00:03:42.123 00:03:42.123 ' 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.123 --rc genhtml_branch_coverage=1 00:03:42.123 --rc genhtml_function_coverage=1 00:03:42.123 --rc genhtml_legend=1 00:03:42.123 --rc geninfo_all_blocks=1 00:03:42.123 --rc geninfo_unexecuted_blocks=1 00:03:42.123 00:03:42.123 ' 00:03:42.123 13:00:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1482194 00:03:42.123 13:00:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.123 13:00:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1482194 00:03:42.123 13:00:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@833 -- # '[' -z 1482194 ']' 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:42.123 13:00:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.123 [2024-11-06 13:00:23.908371] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:03:42.123 [2024-11-06 13:00:23.908437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482194 ] 00:03:42.123 [2024-11-06 13:00:24.001403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.384 [2024-11-06 13:00:24.053597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:42.384 [2024-11-06 13:00:24.053650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1482194' to capture a snapshot of events at runtime. 00:03:42.384 [2024-11-06 13:00:24.053659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:42.384 [2024-11-06 13:00:24.053666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:42.384 [2024-11-06 13:00:24.053672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1482194 for offline analysis/debug. 
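Every rpc_cmd in the tests below is a JSON-RPC request sent over the spdk_tgt Unix socket. A minimal sketch of that exchange using plain POSIX sockets; there is no framing or retry logic here, and the request body mirrors the "bdev_malloc_create 8 512" call issued further down (8 MiB / 512 B = 16384 blocks, which is what the bdev dump below reports):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Send one JSON-RPC request to spdk_tgt and print the raw reply. */
static int
rpc_once(void)
{
	static const char req[] =
	    "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
	    "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
	struct sockaddr_un sa = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd;

	strncpy(sa.sun_path, "/var/tmp/spdk.sock", sizeof(sa.sun_path) - 1);
	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		return -1;
	}
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		close(fd);
		return -1;
	}
	if (write(fd, req, sizeof(req) - 1) != (ssize_t)(sizeof(req) - 1)) {
		close(fd);
		return -1;
	}
	n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return n > 0 ? 0 : -1;
}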
00:03:42.384 [2024-11-06 13:00:24.054475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.956 13:00:24 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:42.956 13:00:24 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:42.956 13:00:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.956 13:00:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.956 13:00:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:42.956 13:00:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:42.956 13:00:24 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.956 13:00:24 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.956 13:00:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.956 ************************************ 00:03:42.956 START TEST rpc_integrity 00:03:42.956 ************************************ 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:42.956 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.956 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:43.218 { 00:03:43.218 "name": "Malloc0", 00:03:43.218 "aliases": [ 00:03:43.218 "e987aefd-40ae-4e4a-8646-084bd4b1cd24" 00:03:43.218 ], 00:03:43.218 "product_name": "Malloc disk", 00:03:43.218 "block_size": 512, 00:03:43.218 "num_blocks": 16384, 00:03:43.218 "uuid": "e987aefd-40ae-4e4a-8646-084bd4b1cd24", 00:03:43.218 "assigned_rate_limits": { 00:03:43.218 "rw_ios_per_sec": 0, 00:03:43.218 "rw_mbytes_per_sec": 0, 00:03:43.218 "r_mbytes_per_sec": 0, 00:03:43.218 "w_mbytes_per_sec": 0 00:03:43.218 }, 
00:03:43.218 "claimed": false, 00:03:43.218 "zoned": false, 00:03:43.218 "supported_io_types": { 00:03:43.218 "read": true, 00:03:43.218 "write": true, 00:03:43.218 "unmap": true, 00:03:43.218 "flush": true, 00:03:43.218 "reset": true, 00:03:43.218 "nvme_admin": false, 00:03:43.218 "nvme_io": false, 00:03:43.218 "nvme_io_md": false, 00:03:43.218 "write_zeroes": true, 00:03:43.218 "zcopy": true, 00:03:43.218 "get_zone_info": false, 00:03:43.218 "zone_management": false, 00:03:43.218 "zone_append": false, 00:03:43.218 "compare": false, 00:03:43.218 "compare_and_write": false, 00:03:43.218 "abort": true, 00:03:43.218 "seek_hole": false, 00:03:43.218 "seek_data": false, 00:03:43.218 "copy": true, 00:03:43.218 "nvme_iov_md": false 00:03:43.218 }, 00:03:43.218 "memory_domains": [ 00:03:43.218 { 00:03:43.218 "dma_device_id": "system", 00:03:43.218 "dma_device_type": 1 00:03:43.218 }, 00:03:43.218 { 00:03:43.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.218 "dma_device_type": 2 00:03:43.218 } 00:03:43.218 ], 00:03:43.218 "driver_specific": {} 00:03:43.218 } 00:03:43.218 ]' 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 [2024-11-06 13:00:24.926939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:43.218 [2024-11-06 13:00:24.926984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:43.218 [2024-11-06 13:00:24.927000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xab1800 00:03:43.218 [2024-11-06 13:00:24.927008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:43.218 [2024-11-06 13:00:24.928523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:43.218 [2024-11-06 13:00:24.928558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:43.218 Passthru0 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 13:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.218 { 00:03:43.218 "name": "Malloc0", 00:03:43.218 "aliases": [ 00:03:43.218 "e987aefd-40ae-4e4a-8646-084bd4b1cd24" 00:03:43.218 ], 00:03:43.218 "product_name": "Malloc disk", 00:03:43.218 "block_size": 512, 00:03:43.218 "num_blocks": 16384, 00:03:43.218 "uuid": "e987aefd-40ae-4e4a-8646-084bd4b1cd24", 00:03:43.218 "assigned_rate_limits": { 00:03:43.218 "rw_ios_per_sec": 0, 00:03:43.218 "rw_mbytes_per_sec": 0, 00:03:43.218 "r_mbytes_per_sec": 0, 00:03:43.218 "w_mbytes_per_sec": 0 00:03:43.218 }, 00:03:43.218 "claimed": true, 00:03:43.218 "claim_type": "exclusive_write", 00:03:43.218 "zoned": false, 00:03:43.218 "supported_io_types": { 00:03:43.218 "read": true, 00:03:43.218 "write": true, 00:03:43.218 "unmap": true, 00:03:43.218 "flush": 
true, 00:03:43.218 "reset": true, 00:03:43.218 "nvme_admin": false, 00:03:43.218 "nvme_io": false, 00:03:43.218 "nvme_io_md": false, 00:03:43.218 "write_zeroes": true, 00:03:43.218 "zcopy": true, 00:03:43.218 "get_zone_info": false, 00:03:43.218 "zone_management": false, 00:03:43.218 "zone_append": false, 00:03:43.218 "compare": false, 00:03:43.218 "compare_and_write": false, 00:03:43.218 "abort": true, 00:03:43.218 "seek_hole": false, 00:03:43.218 "seek_data": false, 00:03:43.218 "copy": true, 00:03:43.218 "nvme_iov_md": false 00:03:43.218 }, 00:03:43.218 "memory_domains": [ 00:03:43.218 { 00:03:43.218 "dma_device_id": "system", 00:03:43.218 "dma_device_type": 1 00:03:43.218 }, 00:03:43.218 { 00:03:43.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.218 "dma_device_type": 2 00:03:43.218 } 00:03:43.218 ], 00:03:43.218 "driver_specific": {} 00:03:43.218 }, 00:03:43.218 { 00:03:43.218 "name": "Passthru0", 00:03:43.218 "aliases": [ 00:03:43.218 "4bb780b0-4816-5132-a969-43ba328f26ca" 00:03:43.218 ], 00:03:43.218 "product_name": "passthru", 00:03:43.218 "block_size": 512, 00:03:43.218 "num_blocks": 16384, 00:03:43.218 "uuid": "4bb780b0-4816-5132-a969-43ba328f26ca", 00:03:43.218 "assigned_rate_limits": { 00:03:43.218 "rw_ios_per_sec": 0, 00:03:43.218 "rw_mbytes_per_sec": 0, 00:03:43.218 "r_mbytes_per_sec": 0, 00:03:43.218 "w_mbytes_per_sec": 0 00:03:43.218 }, 00:03:43.218 "claimed": false, 00:03:43.218 "zoned": false, 00:03:43.218 "supported_io_types": { 00:03:43.218 "read": true, 00:03:43.218 "write": true, 00:03:43.218 "unmap": true, 00:03:43.218 "flush": true, 00:03:43.218 "reset": true, 00:03:43.218 "nvme_admin": false, 00:03:43.218 "nvme_io": false, 00:03:43.218 "nvme_io_md": false, 00:03:43.218 "write_zeroes": true, 00:03:43.218 "zcopy": true, 00:03:43.218 "get_zone_info": false, 00:03:43.218 "zone_management": false, 00:03:43.218 "zone_append": false, 00:03:43.218 "compare": false, 00:03:43.218 "compare_and_write": false, 00:03:43.218 "abort": true, 00:03:43.218 "seek_hole": false, 00:03:43.218 "seek_data": false, 00:03:43.218 "copy": true, 00:03:43.218 "nvme_iov_md": false 00:03:43.218 }, 00:03:43.218 "memory_domains": [ 00:03:43.218 { 00:03:43.218 "dma_device_id": "system", 00:03:43.218 "dma_device_type": 1 00:03:43.218 }, 00:03:43.218 { 00:03:43.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.218 "dma_device_type": 2 00:03:43.218 } 00:03:43.218 ], 00:03:43.218 "driver_specific": { 00:03:43.218 "passthru": { 00:03:43.218 "name": "Passthru0", 00:03:43.218 "base_bdev_name": "Malloc0" 00:03:43.218 } 00:03:43.218 } 00:03:43.218 } 00:03:43.218 ]' 00:03:43.218 13:00:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:43.218 13:00:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:43.218 00:03:43.218 real 0m0.303s 00:03:43.218 user 0m0.187s 00:03:43.218 sys 0m0.044s 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.218 13:00:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.219 ************************************ 00:03:43.219 END TEST rpc_integrity 00:03:43.219 ************************************ 00:03:43.480 13:00:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 ************************************ 00:03:43.480 START TEST rpc_plugins 00:03:43.480 ************************************ 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:43.480 { 00:03:43.480 "name": "Malloc1", 00:03:43.480 "aliases": [ 00:03:43.480 "aa95c6d0-8795-4910-893e-49a2bb546b95" 00:03:43.480 ], 00:03:43.480 "product_name": "Malloc disk", 00:03:43.480 "block_size": 4096, 00:03:43.480 "num_blocks": 256, 00:03:43.480 "uuid": "aa95c6d0-8795-4910-893e-49a2bb546b95", 00:03:43.480 "assigned_rate_limits": { 00:03:43.480 "rw_ios_per_sec": 0, 00:03:43.480 "rw_mbytes_per_sec": 0, 00:03:43.480 "r_mbytes_per_sec": 0, 00:03:43.480 "w_mbytes_per_sec": 0 00:03:43.480 }, 00:03:43.480 "claimed": false, 00:03:43.480 "zoned": false, 00:03:43.480 "supported_io_types": { 00:03:43.480 "read": true, 00:03:43.480 "write": true, 00:03:43.480 "unmap": true, 00:03:43.480 "flush": true, 00:03:43.480 "reset": true, 00:03:43.480 "nvme_admin": false, 00:03:43.480 "nvme_io": false, 00:03:43.480 "nvme_io_md": false, 00:03:43.480 "write_zeroes": true, 00:03:43.480 "zcopy": true, 00:03:43.480 "get_zone_info": false, 00:03:43.480 "zone_management": false, 00:03:43.480 "zone_append": false, 00:03:43.480 "compare": false, 00:03:43.480 "compare_and_write": false, 00:03:43.480 "abort": true, 00:03:43.480 "seek_hole": false, 00:03:43.480 "seek_data": false, 00:03:43.480 "copy": true, 00:03:43.480 "nvme_iov_md": false 
00:03:43.480 }, 00:03:43.480 "memory_domains": [ 00:03:43.480 { 00:03:43.480 "dma_device_id": "system", 00:03:43.480 "dma_device_type": 1 00:03:43.480 }, 00:03:43.480 { 00:03:43.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.480 "dma_device_type": 2 00:03:43.480 } 00:03:43.480 ], 00:03:43.480 "driver_specific": {} 00:03:43.480 } 00:03:43.480 ]' 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:43.480 13:00:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:43.480 00:03:43.480 real 0m0.156s 00:03:43.480 user 0m0.091s 00:03:43.480 sys 0m0.025s 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.480 13:00:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:43.480 ************************************ 00:03:43.480 END TEST rpc_plugins 00:03:43.480 ************************************ 00:03:43.480 13:00:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.480 13:00:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.741 ************************************ 00:03:43.741 START TEST rpc_trace_cmd_test 00:03:43.741 ************************************ 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.741 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:43.741 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1482194", 00:03:43.741 "tpoint_group_mask": "0x8", 00:03:43.741 "iscsi_conn": { 00:03:43.741 "mask": "0x2", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "scsi": { 00:03:43.741 "mask": "0x4", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "bdev": { 00:03:43.741 "mask": "0x8", 00:03:43.741 "tpoint_mask": "0xffffffffffffffff" 00:03:43.741 }, 00:03:43.741 "nvmf_rdma": { 00:03:43.741 "mask": "0x10", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "nvmf_tcp": { 00:03:43.741 "mask": "0x20", 00:03:43.741 
"tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "ftl": { 00:03:43.741 "mask": "0x40", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "blobfs": { 00:03:43.741 "mask": "0x80", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "dsa": { 00:03:43.741 "mask": "0x200", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "thread": { 00:03:43.741 "mask": "0x400", 00:03:43.741 "tpoint_mask": "0x0" 00:03:43.741 }, 00:03:43.741 "nvme_pcie": { 00:03:43.742 "mask": "0x800", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "iaa": { 00:03:43.742 "mask": "0x1000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "nvme_tcp": { 00:03:43.742 "mask": "0x2000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "bdev_nvme": { 00:03:43.742 "mask": "0x4000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "sock": { 00:03:43.742 "mask": "0x8000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "blob": { 00:03:43.742 "mask": "0x10000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "bdev_raid": { 00:03:43.742 "mask": "0x20000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 }, 00:03:43.742 "scheduler": { 00:03:43.742 "mask": "0x40000", 00:03:43.742 "tpoint_mask": "0x0" 00:03:43.742 } 00:03:43.742 }' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:43.742 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:44.004 13:00:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:44.004 00:03:44.004 real 0m0.253s 00:03:44.004 user 0m0.214s 00:03:44.004 sys 0m0.028s 00:03:44.004 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.004 13:00:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 ************************************ 00:03:44.004 END TEST rpc_trace_cmd_test 00:03:44.004 ************************************ 00:03:44.004 13:00:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:44.004 13:00:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:44.004 13:00:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:44.004 13:00:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.004 13:00:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.004 13:00:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 ************************************ 00:03:44.004 START TEST rpc_daemon_integrity 00:03:44.004 ************************************ 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.004 13:00:25 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:44.004 { 00:03:44.004 "name": "Malloc2", 00:03:44.004 "aliases": [ 00:03:44.004 "5e636588-0073-45e0-a438-cd19e9d67137" 00:03:44.004 ], 00:03:44.004 "product_name": "Malloc disk", 00:03:44.004 "block_size": 512, 00:03:44.004 "num_blocks": 16384, 00:03:44.004 "uuid": "5e636588-0073-45e0-a438-cd19e9d67137", 00:03:44.004 "assigned_rate_limits": { 00:03:44.004 "rw_ios_per_sec": 0, 00:03:44.004 "rw_mbytes_per_sec": 0, 00:03:44.004 "r_mbytes_per_sec": 0, 00:03:44.004 "w_mbytes_per_sec": 0 00:03:44.004 }, 00:03:44.004 "claimed": false, 00:03:44.004 "zoned": false, 00:03:44.004 "supported_io_types": { 00:03:44.004 "read": true, 00:03:44.004 "write": true, 00:03:44.004 "unmap": true, 00:03:44.004 "flush": true, 00:03:44.004 "reset": true, 00:03:44.004 "nvme_admin": false, 00:03:44.004 "nvme_io": false, 00:03:44.004 "nvme_io_md": false, 00:03:44.004 "write_zeroes": true, 00:03:44.004 "zcopy": true, 00:03:44.004 "get_zone_info": false, 00:03:44.004 "zone_management": false, 00:03:44.004 "zone_append": false, 00:03:44.004 "compare": false, 00:03:44.004 "compare_and_write": false, 00:03:44.004 "abort": true, 00:03:44.004 "seek_hole": false, 00:03:44.004 "seek_data": false, 00:03:44.004 "copy": true, 00:03:44.004 "nvme_iov_md": false 00:03:44.004 }, 00:03:44.004 "memory_domains": [ 00:03:44.004 { 00:03:44.004 "dma_device_id": "system", 00:03:44.004 "dma_device_type": 1 00:03:44.004 }, 00:03:44.004 { 00:03:44.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.004 "dma_device_type": 2 00:03:44.004 } 00:03:44.004 ], 00:03:44.004 "driver_specific": {} 00:03:44.004 } 00:03:44.004 ]' 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.004 [2024-11-06 13:00:25.893575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:44.004 
[2024-11-06 13:00:25.893618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:44.004 [2024-11-06 13:00:25.893636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9fe550 00:03:44.004 [2024-11-06 13:00:25.893643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:44.004 [2024-11-06 13:00:25.895173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:44.004 [2024-11-06 13:00:25.895208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:44.004 Passthru0 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.004 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.266 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.266 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:44.266 { 00:03:44.266 "name": "Malloc2", 00:03:44.266 "aliases": [ 00:03:44.266 "5e636588-0073-45e0-a438-cd19e9d67137" 00:03:44.266 ], 00:03:44.266 "product_name": "Malloc disk", 00:03:44.266 "block_size": 512, 00:03:44.266 "num_blocks": 16384, 00:03:44.266 "uuid": "5e636588-0073-45e0-a438-cd19e9d67137", 00:03:44.266 "assigned_rate_limits": { 00:03:44.266 "rw_ios_per_sec": 0, 00:03:44.266 "rw_mbytes_per_sec": 0, 00:03:44.266 "r_mbytes_per_sec": 0, 00:03:44.266 "w_mbytes_per_sec": 0 00:03:44.266 }, 00:03:44.266 "claimed": true, 00:03:44.266 "claim_type": "exclusive_write", 00:03:44.266 "zoned": false, 00:03:44.266 "supported_io_types": { 00:03:44.266 "read": true, 00:03:44.266 "write": true, 00:03:44.266 "unmap": true, 00:03:44.266 "flush": true, 00:03:44.266 "reset": true, 00:03:44.266 "nvme_admin": false, 00:03:44.266 "nvme_io": false, 00:03:44.266 "nvme_io_md": false, 00:03:44.266 "write_zeroes": true, 00:03:44.266 "zcopy": true, 00:03:44.266 "get_zone_info": false, 00:03:44.266 "zone_management": false, 00:03:44.266 "zone_append": false, 00:03:44.266 "compare": false, 00:03:44.266 "compare_and_write": false, 00:03:44.266 "abort": true, 00:03:44.266 "seek_hole": false, 00:03:44.266 "seek_data": false, 00:03:44.266 "copy": true, 00:03:44.266 "nvme_iov_md": false 00:03:44.266 }, 00:03:44.266 "memory_domains": [ 00:03:44.266 { 00:03:44.266 "dma_device_id": "system", 00:03:44.266 "dma_device_type": 1 00:03:44.266 }, 00:03:44.266 { 00:03:44.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.266 "dma_device_type": 2 00:03:44.266 } 00:03:44.266 ], 00:03:44.266 "driver_specific": {} 00:03:44.266 }, 00:03:44.266 { 00:03:44.266 "name": "Passthru0", 00:03:44.267 "aliases": [ 00:03:44.267 "f319ffc9-cbdd-57f5-8294-119d0053b440" 00:03:44.267 ], 00:03:44.267 "product_name": "passthru", 00:03:44.267 "block_size": 512, 00:03:44.267 "num_blocks": 16384, 00:03:44.267 "uuid": "f319ffc9-cbdd-57f5-8294-119d0053b440", 00:03:44.267 "assigned_rate_limits": { 00:03:44.267 "rw_ios_per_sec": 0, 00:03:44.267 "rw_mbytes_per_sec": 0, 00:03:44.267 "r_mbytes_per_sec": 0, 00:03:44.267 "w_mbytes_per_sec": 0 00:03:44.267 }, 00:03:44.267 "claimed": false, 00:03:44.267 "zoned": false, 00:03:44.267 "supported_io_types": { 00:03:44.267 "read": true, 00:03:44.267 "write": true, 00:03:44.267 "unmap": true, 00:03:44.267 "flush": true, 00:03:44.267 "reset": true, 
00:03:44.267 "nvme_admin": false, 00:03:44.267 "nvme_io": false, 00:03:44.267 "nvme_io_md": false, 00:03:44.267 "write_zeroes": true, 00:03:44.267 "zcopy": true, 00:03:44.267 "get_zone_info": false, 00:03:44.267 "zone_management": false, 00:03:44.267 "zone_append": false, 00:03:44.267 "compare": false, 00:03:44.267 "compare_and_write": false, 00:03:44.267 "abort": true, 00:03:44.267 "seek_hole": false, 00:03:44.267 "seek_data": false, 00:03:44.267 "copy": true, 00:03:44.267 "nvme_iov_md": false 00:03:44.267 }, 00:03:44.267 "memory_domains": [ 00:03:44.267 { 00:03:44.267 "dma_device_id": "system", 00:03:44.267 "dma_device_type": 1 00:03:44.267 }, 00:03:44.267 { 00:03:44.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:44.267 "dma_device_type": 2 00:03:44.267 } 00:03:44.267 ], 00:03:44.267 "driver_specific": { 00:03:44.267 "passthru": { 00:03:44.267 "name": "Passthru0", 00:03:44.267 "base_bdev_name": "Malloc2" 00:03:44.267 } 00:03:44.267 } 00:03:44.267 } 00:03:44.267 ]' 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:44.267 13:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:44.267 00:03:44.267 real 0m0.309s 00:03:44.267 user 0m0.190s 00:03:44.267 sys 0m0.048s 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.267 13:00:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:44.267 ************************************ 00:03:44.267 END TEST rpc_daemon_integrity 00:03:44.267 ************************************ 00:03:44.267 13:00:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:44.267 13:00:26 rpc -- rpc/rpc.sh@84 -- # killprocess 1482194 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@952 -- # '[' -z 1482194 ']' 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@956 -- # kill -0 1482194 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@957 -- # uname 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1482194 
00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1482194' 00:03:44.267 killing process with pid 1482194 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@971 -- # kill 1482194 00:03:44.267 13:00:26 rpc -- common/autotest_common.sh@976 -- # wait 1482194 00:03:44.528 00:03:44.528 real 0m2.756s 00:03:44.528 user 0m3.515s 00:03:44.528 sys 0m0.849s 00:03:44.528 13:00:26 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.528 13:00:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.528 ************************************ 00:03:44.528 END TEST rpc 00:03:44.528 ************************************ 00:03:44.789 13:00:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.789 13:00:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.789 13:00:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.789 13:00:26 -- common/autotest_common.sh@10 -- # set +x 00:03:44.789 ************************************ 00:03:44.789 START TEST skip_rpc 00:03:44.789 ************************************ 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:44.789 * Looking for test storage... 00:03:44.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.789 13:00:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.789 13:00:26 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.789 --rc genhtml_branch_coverage=1 00:03:44.789 --rc genhtml_function_coverage=1 00:03:44.789 --rc genhtml_legend=1 00:03:44.790 --rc geninfo_all_blocks=1 00:03:44.790 --rc geninfo_unexecuted_blocks=1 00:03:44.790 00:03:44.790 ' 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.790 --rc genhtml_branch_coverage=1 00:03:44.790 --rc genhtml_function_coverage=1 00:03:44.790 --rc genhtml_legend=1 00:03:44.790 --rc geninfo_all_blocks=1 00:03:44.790 --rc geninfo_unexecuted_blocks=1 00:03:44.790 00:03:44.790 ' 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.790 --rc genhtml_branch_coverage=1 00:03:44.790 --rc genhtml_function_coverage=1 00:03:44.790 --rc genhtml_legend=1 00:03:44.790 --rc geninfo_all_blocks=1 00:03:44.790 --rc geninfo_unexecuted_blocks=1 00:03:44.790 00:03:44.790 ' 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:44.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.790 --rc genhtml_branch_coverage=1 00:03:44.790 --rc genhtml_function_coverage=1 00:03:44.790 --rc genhtml_legend=1 00:03:44.790 --rc geninfo_all_blocks=1 00:03:44.790 --rc geninfo_unexecuted_blocks=1 00:03:44.790 00:03:44.790 ' 00:03:44.790 13:00:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.790 13:00:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:44.790 13:00:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.790 13:00:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.051 ************************************ 00:03:45.051 START TEST skip_rpc 00:03:45.051 ************************************ 00:03:45.051 13:00:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:45.051 
13:00:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1483041 00:03:45.051 13:00:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.051 13:00:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:45.051 13:00:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:45.051 [2024-11-06 13:00:26.766869] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:03:45.051 [2024-11-06 13:00:26.766937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483041 ] 00:03:45.051 [2024-11-06 13:00:26.861301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.051 [2024-11-06 13:00:26.913576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.339 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1483041 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 1483041 ']' 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 1483041 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1483041 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1483041' 00:03:50.340 killing process with pid 1483041 00:03:50.340 13:00:31 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 1483041 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 1483041 00:03:50.340 00:03:50.340 real 0m5.264s 00:03:50.340 user 0m5.010s 00:03:50.340 sys 0m0.301s 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.340 13:00:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.340 ************************************ 00:03:50.340 END TEST skip_rpc 00:03:50.340 ************************************ 00:03:50.340 13:00:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:50.340 13:00:32 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.340 13:00:32 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.340 13:00:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.340 ************************************ 00:03:50.340 START TEST skip_rpc_with_json 00:03:50.340 ************************************ 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1484080 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1484080 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 1484080 ']' 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:50.340 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.340 [2024-11-06 13:00:32.117299] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
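skip_rpc_with_json brings up a full target so it can save a JSON configuration and later replay it. Its first RPC is expected to fail, which is how the harness proves the TCP transport does not pre-exist before the test creates it. A minimal sketch of that round trip with the in-tree rpc.py client, assuming the default socket /var/tmp/spdk.sock (the harness wraps the same calls in rpc_cmd):

    rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp   # expected failure: transport not created yet (-19)
    rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp       # target then prints '*** TCP Transport Init ***'
    rpc.py -s /var/tmp/spdk.sock nvmf_get_transports --trtype tcp   # now succeeds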
00:03:50.340 [2024-11-06 13:00:32.117351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484080 ] 00:03:50.340 [2024-11-06 13:00:32.203591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.340 [2024-11-06 13:00:32.236663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.394 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:51.394 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.395 [2024-11-06 13:00:32.903882] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:51.395 request: 00:03:51.395 { 00:03:51.395 "trtype": "tcp", 00:03:51.395 "method": "nvmf_get_transports", 00:03:51.395 "req_id": 1 00:03:51.395 } 00:03:51.395 Got JSON-RPC error response 00:03:51.395 response: 00:03:51.395 { 00:03:51.395 "code": -19, 00:03:51.395 "message": "No such device" 00:03:51.395 } 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.395 [2024-11-06 13:00:32.915979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.395 13:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:51.395 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.395 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:51.395 { 00:03:51.395 "subsystems": [ 00:03:51.395 { 00:03:51.395 "subsystem": "fsdev", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "fsdev_set_opts", 00:03:51.395 "params": { 00:03:51.395 "fsdev_io_pool_size": 65535, 00:03:51.395 "fsdev_io_cache_size": 256 00:03:51.395 } 00:03:51.395 } 00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "vfio_user_target", 00:03:51.395 "config": null 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "keyring", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "iobuf", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "iobuf_set_options", 00:03:51.395 "params": { 00:03:51.395 "small_pool_count": 8192, 00:03:51.395 "large_pool_count": 1024, 00:03:51.395 "small_bufsize": 8192, 00:03:51.395 "large_bufsize": 135168, 00:03:51.395 "enable_numa": false 00:03:51.395 } 00:03:51.395 } 
00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "sock", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "sock_set_default_impl", 00:03:51.395 "params": { 00:03:51.395 "impl_name": "posix" 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "sock_impl_set_options", 00:03:51.395 "params": { 00:03:51.395 "impl_name": "ssl", 00:03:51.395 "recv_buf_size": 4096, 00:03:51.395 "send_buf_size": 4096, 00:03:51.395 "enable_recv_pipe": true, 00:03:51.395 "enable_quickack": false, 00:03:51.395 "enable_placement_id": 0, 00:03:51.395 "enable_zerocopy_send_server": true, 00:03:51.395 "enable_zerocopy_send_client": false, 00:03:51.395 "zerocopy_threshold": 0, 00:03:51.395 "tls_version": 0, 00:03:51.395 "enable_ktls": false 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "sock_impl_set_options", 00:03:51.395 "params": { 00:03:51.395 "impl_name": "posix", 00:03:51.395 "recv_buf_size": 2097152, 00:03:51.395 "send_buf_size": 2097152, 00:03:51.395 "enable_recv_pipe": true, 00:03:51.395 "enable_quickack": false, 00:03:51.395 "enable_placement_id": 0, 00:03:51.395 "enable_zerocopy_send_server": true, 00:03:51.395 "enable_zerocopy_send_client": false, 00:03:51.395 "zerocopy_threshold": 0, 00:03:51.395 "tls_version": 0, 00:03:51.395 "enable_ktls": false 00:03:51.395 } 00:03:51.395 } 00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "vmd", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "accel", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "accel_set_options", 00:03:51.395 "params": { 00:03:51.395 "small_cache_size": 128, 00:03:51.395 "large_cache_size": 16, 00:03:51.395 "task_count": 2048, 00:03:51.395 "sequence_count": 2048, 00:03:51.395 "buf_count": 2048 00:03:51.395 } 00:03:51.395 } 00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "bdev", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "bdev_set_options", 00:03:51.395 "params": { 00:03:51.395 "bdev_io_pool_size": 65535, 00:03:51.395 "bdev_io_cache_size": 256, 00:03:51.395 "bdev_auto_examine": true, 00:03:51.395 "iobuf_small_cache_size": 128, 00:03:51.395 "iobuf_large_cache_size": 16 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "bdev_raid_set_options", 00:03:51.395 "params": { 00:03:51.395 "process_window_size_kb": 1024, 00:03:51.395 "process_max_bandwidth_mb_sec": 0 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "bdev_iscsi_set_options", 00:03:51.395 "params": { 00:03:51.395 "timeout_sec": 30 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "bdev_nvme_set_options", 00:03:51.395 "params": { 00:03:51.395 "action_on_timeout": "none", 00:03:51.395 "timeout_us": 0, 00:03:51.395 "timeout_admin_us": 0, 00:03:51.395 "keep_alive_timeout_ms": 10000, 00:03:51.395 "arbitration_burst": 0, 00:03:51.395 "low_priority_weight": 0, 00:03:51.395 "medium_priority_weight": 0, 00:03:51.395 "high_priority_weight": 0, 00:03:51.395 "nvme_adminq_poll_period_us": 10000, 00:03:51.395 "nvme_ioq_poll_period_us": 0, 00:03:51.395 "io_queue_requests": 0, 00:03:51.395 "delay_cmd_submit": true, 00:03:51.395 "transport_retry_count": 4, 00:03:51.395 "bdev_retry_count": 3, 00:03:51.395 "transport_ack_timeout": 0, 00:03:51.395 "ctrlr_loss_timeout_sec": 0, 00:03:51.395 "reconnect_delay_sec": 0, 00:03:51.395 "fast_io_fail_timeout_sec": 0, 00:03:51.395 "disable_auto_failback": false, 00:03:51.395 "generate_uuids": false, 00:03:51.395 "transport_tos": 
0, 00:03:51.395 "nvme_error_stat": false, 00:03:51.395 "rdma_srq_size": 0, 00:03:51.395 "io_path_stat": false, 00:03:51.395 "allow_accel_sequence": false, 00:03:51.395 "rdma_max_cq_size": 0, 00:03:51.395 "rdma_cm_event_timeout_ms": 0, 00:03:51.395 "dhchap_digests": [ 00:03:51.395 "sha256", 00:03:51.395 "sha384", 00:03:51.395 "sha512" 00:03:51.395 ], 00:03:51.395 "dhchap_dhgroups": [ 00:03:51.395 "null", 00:03:51.395 "ffdhe2048", 00:03:51.395 "ffdhe3072", 00:03:51.395 "ffdhe4096", 00:03:51.395 "ffdhe6144", 00:03:51.395 "ffdhe8192" 00:03:51.395 ] 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "bdev_nvme_set_hotplug", 00:03:51.395 "params": { 00:03:51.395 "period_us": 100000, 00:03:51.395 "enable": false 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "bdev_wait_for_examine" 00:03:51.395 } 00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "scsi", 00:03:51.395 "config": null 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "scheduler", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "framework_set_scheduler", 00:03:51.395 "params": { 00:03:51.395 "name": "static" 00:03:51.395 } 00:03:51.395 } 00:03:51.395 ] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "vhost_scsi", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "vhost_blk", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "ublk", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "nbd", 00:03:51.395 "config": [] 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "subsystem": "nvmf", 00:03:51.395 "config": [ 00:03:51.395 { 00:03:51.395 "method": "nvmf_set_config", 00:03:51.395 "params": { 00:03:51.395 "discovery_filter": "match_any", 00:03:51.395 "admin_cmd_passthru": { 00:03:51.395 "identify_ctrlr": false 00:03:51.395 }, 00:03:51.395 "dhchap_digests": [ 00:03:51.395 "sha256", 00:03:51.395 "sha384", 00:03:51.395 "sha512" 00:03:51.395 ], 00:03:51.395 "dhchap_dhgroups": [ 00:03:51.395 "null", 00:03:51.395 "ffdhe2048", 00:03:51.395 "ffdhe3072", 00:03:51.395 "ffdhe4096", 00:03:51.395 "ffdhe6144", 00:03:51.395 "ffdhe8192" 00:03:51.395 ] 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "nvmf_set_max_subsystems", 00:03:51.395 "params": { 00:03:51.395 "max_subsystems": 1024 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.395 "method": "nvmf_set_crdt", 00:03:51.395 "params": { 00:03:51.395 "crdt1": 0, 00:03:51.395 "crdt2": 0, 00:03:51.395 "crdt3": 0 00:03:51.395 } 00:03:51.395 }, 00:03:51.395 { 00:03:51.396 "method": "nvmf_create_transport", 00:03:51.396 "params": { 00:03:51.396 "trtype": "TCP", 00:03:51.396 "max_queue_depth": 128, 00:03:51.396 "max_io_qpairs_per_ctrlr": 127, 00:03:51.396 "in_capsule_data_size": 4096, 00:03:51.396 "max_io_size": 131072, 00:03:51.396 "io_unit_size": 131072, 00:03:51.396 "max_aq_depth": 128, 00:03:51.396 "num_shared_buffers": 511, 00:03:51.396 "buf_cache_size": 4294967295, 00:03:51.396 "dif_insert_or_strip": false, 00:03:51.396 "zcopy": false, 00:03:51.396 "c2h_success": true, 00:03:51.396 "sock_priority": 0, 00:03:51.396 "abort_timeout_sec": 1, 00:03:51.396 "ack_timeout": 0, 00:03:51.396 "data_wr_pool_size": 0 00:03:51.396 } 00:03:51.396 } 00:03:51.396 ] 00:03:51.396 }, 00:03:51.396 { 00:03:51.396 "subsystem": "iscsi", 00:03:51.396 "config": [ 00:03:51.396 { 00:03:51.396 "method": "iscsi_set_options", 00:03:51.396 "params": { 00:03:51.396 "node_base": "iqn.2016-06.io.spdk", 00:03:51.396 "max_sessions": 
128, 00:03:51.396 "max_connections_per_session": 2, 00:03:51.396 "max_queue_depth": 64, 00:03:51.396 "default_time2wait": 2, 00:03:51.396 "default_time2retain": 20, 00:03:51.396 "first_burst_length": 8192, 00:03:51.396 "immediate_data": true, 00:03:51.396 "allow_duplicated_isid": false, 00:03:51.396 "error_recovery_level": 0, 00:03:51.396 "nop_timeout": 60, 00:03:51.396 "nop_in_interval": 30, 00:03:51.396 "disable_chap": false, 00:03:51.396 "require_chap": false, 00:03:51.396 "mutual_chap": false, 00:03:51.396 "chap_group": 0, 00:03:51.396 "max_large_datain_per_connection": 64, 00:03:51.396 "max_r2t_per_connection": 4, 00:03:51.396 "pdu_pool_size": 36864, 00:03:51.396 "immediate_data_pool_size": 16384, 00:03:51.396 "data_out_pool_size": 2048 00:03:51.396 } 00:03:51.396 } 00:03:51.396 ] 00:03:51.396 } 00:03:51.396 ] 00:03:51.396 } 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1484080 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1484080 ']' 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1484080 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1484080 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1484080' 00:03:51.396 killing process with pid 1484080 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1484080 00:03:51.396 13:00:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1484080 00:03:51.656 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1484429 00:03:51.656 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:51.656 13:00:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1484429 ']' 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 1484429' 00:03:56.940 killing process with pid 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1484429 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.940 00:03:56.940 real 0m6.543s 00:03:56.940 user 0m6.452s 00:03:56.940 sys 0m0.557s 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.940 ************************************ 00:03:56.940 END TEST skip_rpc_with_json 00:03:56.940 ************************************ 00:03:56.940 13:00:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.940 ************************************ 00:03:56.940 START TEST skip_rpc_with_delay 00:03:56.940 ************************************ 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.940 
[2024-11-06 13:00:38.740589] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:56.940 00:03:56.940 real 0m0.079s 00:03:56.940 user 0m0.048s 00:03:56.940 sys 0m0.030s 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:56.940 13:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:56.940 ************************************ 00:03:56.940 END TEST skip_rpc_with_delay 00:03:56.940 ************************************ 00:03:56.940 13:00:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:56.940 13:00:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:56.940 13:00:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:56.940 13:00:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.940 ************************************ 00:03:56.940 START TEST exit_on_failed_rpc_init 00:03:56.940 ************************************ 00:03:56.940 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1485500 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1485500 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 1485500 ']' 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:57.201 13:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.201 [2024-11-06 13:00:38.913280] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
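exit_on_failed_rpc_init, starting here, checks that a second target aimed at an already-claimed RPC socket refuses to come up. The shape of that check, sketched with spdk_tgt assumed on PATH and both instances left on the default /var/tmp/spdk.sock:

    spdk_tgt -m 0x1 &               # first instance binds /var/tmp/spdk.sock
    # (wait for the socket here; the harness uses its waitforlisten helper)
    if spdk_tgt -m 0x2; then        # second instance must fail RPC init and exit non-zero
        echo 'unexpected success'; exit 1
    fi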
00:03:57.201 [2024-11-06 13:00:38.913341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485500 ] 00:03:57.201 [2024-11-06 13:00:38.999686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.201 [2024-11-06 13:00:39.034358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:58.142 [2024-11-06 13:00:39.752763] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:03:58.142 [2024-11-06 13:00:39.752815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485730 ] 00:03:58.142 [2024-11-06 13:00:39.841320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.142 [2024-11-06 13:00:39.877345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.142 [2024-11-06 13:00:39.877396] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:58.142 [2024-11-06 13:00:39.877406] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:58.142 [2024-11-06 13:00:39.877412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1485500 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 1485500 ']' 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 1485500 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1485500 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1485500' 00:03:58.142 killing process with pid 1485500 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 1485500 00:03:58.142 13:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 1485500 00:03:58.403 00:03:58.403 real 0m1.330s 00:03:58.403 user 0m1.548s 00:03:58.403 sys 0m0.391s 00:03:58.403 13:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.403 13:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.403 ************************************ 00:03:58.403 END TEST exit_on_failed_rpc_init 00:03:58.403 ************************************ 00:03:58.403 13:00:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.403 00:03:58.403 real 0m13.729s 00:03:58.403 user 0m13.280s 00:03:58.403 sys 0m1.604s 00:03:58.403 13:00:40 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.403 13:00:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.403 ************************************ 00:03:58.403 END TEST skip_rpc 00:03:58.403 ************************************ 00:03:58.403 13:00:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:58.403 13:00:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.403 13:00:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.403 13:00:40 -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.403 ************************************ 00:03:58.403 START TEST rpc_client 00:03:58.403 ************************************ 00:03:58.403 13:00:40 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:58.664 * Looking for test storage... 00:03:58.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:58.664 13:00:40 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:58.664 13:00:40 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:58.664 13:00:40 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:58.664 13:00:40 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.664 13:00:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.665 13:00:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:58.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.665 --rc genhtml_branch_coverage=1 00:03:58.665 --rc genhtml_function_coverage=1 00:03:58.665 --rc genhtml_legend=1 00:03:58.665 --rc geninfo_all_blocks=1 00:03:58.665 --rc geninfo_unexecuted_blocks=1 00:03:58.665 00:03:58.665 ' 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:58.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.665 --rc genhtml_branch_coverage=1 00:03:58.665 --rc genhtml_function_coverage=1 00:03:58.665 --rc genhtml_legend=1 00:03:58.665 --rc geninfo_all_blocks=1 00:03:58.665 --rc geninfo_unexecuted_blocks=1 00:03:58.665 00:03:58.665 ' 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:58.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.665 --rc genhtml_branch_coverage=1 00:03:58.665 --rc genhtml_function_coverage=1 00:03:58.665 --rc genhtml_legend=1 00:03:58.665 --rc geninfo_all_blocks=1 00:03:58.665 --rc geninfo_unexecuted_blocks=1 00:03:58.665 00:03:58.665 ' 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:58.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.665 --rc genhtml_branch_coverage=1 00:03:58.665 --rc genhtml_function_coverage=1 00:03:58.665 --rc genhtml_legend=1 00:03:58.665 --rc geninfo_all_blocks=1 00:03:58.665 --rc geninfo_unexecuted_blocks=1 00:03:58.665 00:03:58.665 ' 00:03:58.665 13:00:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:58.665 OK 00:03:58.665 13:00:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:58.665 00:03:58.665 real 0m0.220s 00:03:58.665 user 0m0.129s 00:03:58.665 sys 0m0.104s 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.665 13:00:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:58.665 ************************************ 00:03:58.665 END TEST rpc_client 00:03:58.665 ************************************ 00:03:58.665 13:00:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:03:58.665 13:00:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.665 13:00:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.665 13:00:40 -- common/autotest_common.sh@10 -- # set +x 00:03:58.927 ************************************ 00:03:58.927 START TEST json_config 00:03:58.927 ************************************ 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.927 13:00:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.927 13:00:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.927 13:00:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.927 13:00:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.927 13:00:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.927 13:00:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:58.927 13:00:40 json_config -- scripts/common.sh@345 -- # : 1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.927 13:00:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.927 13:00:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@353 -- # local d=1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.927 13:00:40 json_config -- scripts/common.sh@355 -- # echo 1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.927 13:00:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@353 -- # local d=2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.927 13:00:40 json_config -- scripts/common.sh@355 -- # echo 2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.927 13:00:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.927 13:00:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.927 13:00:40 json_config -- scripts/common.sh@368 -- # return 0 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.927 --rc genhtml_branch_coverage=1 00:03:58.927 --rc genhtml_function_coverage=1 00:03:58.927 --rc genhtml_legend=1 00:03:58.927 --rc geninfo_all_blocks=1 00:03:58.927 --rc geninfo_unexecuted_blocks=1 00:03:58.927 00:03:58.927 ' 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.927 --rc genhtml_branch_coverage=1 00:03:58.927 --rc genhtml_function_coverage=1 00:03:58.927 --rc genhtml_legend=1 00:03:58.927 --rc geninfo_all_blocks=1 00:03:58.927 --rc geninfo_unexecuted_blocks=1 00:03:58.927 00:03:58.927 ' 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.927 --rc genhtml_branch_coverage=1 00:03:58.927 --rc genhtml_function_coverage=1 00:03:58.927 --rc genhtml_legend=1 00:03:58.927 --rc geninfo_all_blocks=1 00:03:58.927 --rc geninfo_unexecuted_blocks=1 00:03:58.927 00:03:58.927 ' 00:03:58.927 13:00:40 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:58.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.927 --rc genhtml_branch_coverage=1 00:03:58.927 --rc genhtml_function_coverage=1 00:03:58.927 --rc genhtml_legend=1 00:03:58.927 --rc geninfo_all_blocks=1 00:03:58.927 --rc geninfo_unexecuted_blocks=1 00:03:58.927 00:03:58.927 ' 00:03:58.927 13:00:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:58.927 13:00:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.927 13:00:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.928 13:00:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:58.928 13:00:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.928 13:00:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.928 13:00:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.928 13:00:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.928 13:00:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.928 13:00:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.928 13:00:40 json_config -- paths/export.sh@5 -- # export PATH 00:03:58.928 13:00:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@51 -- # : 0 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:58.928 13:00:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:58.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:58.928 13:00:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:58.928 INFO: JSON configuration test init 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.928 13:00:40 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:58.928 13:00:40 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:58.928 13:00:40 json_config -- json_config/common.sh@10 -- # shift 00:03:58.928 13:00:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.928 13:00:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.928 13:00:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.928 13:00:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.928 13:00:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.928 13:00:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1485963 00:03:58.928 13:00:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.928 Waiting for target to run... 00:03:58.928 13:00:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1485963 /var/tmp/spdk_tgt.sock 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@833 -- # '[' -z 1485963 ']' 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.928 13:00:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.928 13:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.189 [2024-11-06 13:00:40.859361] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
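The json_config suite runs its target on a private socket so its RPC traffic cannot collide with anything else on the host, and it holds initialization until a configuration arrives. Roughly, with the flags taken from the trace and the config source simplified to a file (the harness pipes gen_nvme.sh output instead):

    # -r picks the RPC socket, --wait-for-rpc defers subsystem init
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # once the socket is listening, push an entire configuration in one RPC
    rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json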
00:03:59.189 [2024-11-06 13:00:40.859424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485963 ] 00:03:59.450 [2024-11-06 13:00:41.254365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.450 [2024-11-06 13:00:41.286915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:00.021 13:00:41 json_config -- json_config/common.sh@26 -- # echo '' 00:04:00.021 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.021 13:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:00.021 13:00:41 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:00.021 13:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:00.592 13:00:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.592 13:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:00.592 13:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:00.592 13:00:42 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@54 -- # sort 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:00.592 13:00:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:00.593 13:00:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.593 13:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:00.593 13:00:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:00.593 13:00:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.593 13:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.853 13:00:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:00.853 13:00:42 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:00.853 13:00:42 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:00.853 13:00:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.853 13:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.853 MallocForNvmf0 00:04:00.853 13:00:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.853 13:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.114 MallocForNvmf1 00:04:01.114 13:00:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.114 13:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.114 [2024-11-06 13:00:43.004277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:01.375 13:00:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.375 13:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:01.375 13:00:43 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.375 13:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.636 13:00:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.636 13:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.897 13:00:43 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.898 13:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.898 [2024-11-06 13:00:43.734468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.898 13:00:43 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:01.898 13:00:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.898 13:00:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.158 13:00:43 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:02.158 13:00:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.158 13:00:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.158 13:00:43 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:02.158 13:00:43 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:02.158 13:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:02.158 MallocBdevForConfigChangeCheck 00:04:02.158 13:00:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:02.158 13:00:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.158 13:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.158 13:00:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:02.158 13:00:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.730 13:00:44 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:02.730 INFO: shutting down applications... 
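The trace above assembles the NVMe-oF target configuration one RPC at a time: two malloc bdevs, a TCP transport, a subsystem, two namespaces, and a listener. Pulled out of the harness, the same sequence is just a handful of rpc.py calls (a minimal sketch; paths assume the SPDK checkout root and the default target socket):

  RPC=(scripts/rpc.py -s /var/tmp/spdk_tgt.sock)
  "${RPC[@]}" bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
  "${RPC[@]}" bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
  "${RPC[@]}" nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, 8 KiB IO unit, no in-capsule data
  "${RPC[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  "${RPC[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420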
00:04:02.730 13:00:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:02.730 13:00:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:02.730 13:00:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:02.730 13:00:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:02.991 Calling clear_iscsi_subsystem 00:04:02.991 Calling clear_nvmf_subsystem 00:04:02.991 Calling clear_nbd_subsystem 00:04:02.991 Calling clear_ublk_subsystem 00:04:02.991 Calling clear_vhost_blk_subsystem 00:04:02.991 Calling clear_vhost_scsi_subsystem 00:04:02.991 Calling clear_bdev_subsystem 00:04:02.991 13:00:44 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:02.991 13:00:44 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:02.991 13:00:44 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:02.991 13:00:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.992 13:00:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:02.992 13:00:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:03.563 13:00:45 json_config -- json_config/json_config.sh@352 -- # break 00:04:03.563 13:00:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:03.563 13:00:45 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:03.563 13:00:45 json_config -- json_config/common.sh@31 -- # local app=target 00:04:03.563 13:00:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:03.563 13:00:45 json_config -- json_config/common.sh@35 -- # [[ -n 1485963 ]] 00:04:03.563 13:00:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1485963 00:04:03.563 13:00:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:03.563 13:00:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.563 13:00:45 json_config -- json_config/common.sh@41 -- # kill -0 1485963 00:04:03.563 13:00:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:03.824 13:00:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:03.824 13:00:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:03.824 13:00:45 json_config -- json_config/common.sh@41 -- # kill -0 1485963 00:04:03.824 13:00:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:03.824 13:00:45 json_config -- json_config/common.sh@43 -- # break 00:04:03.824 13:00:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:03.824 13:00:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:03.824 SPDK target shutdown done 00:04:03.824 13:00:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:03.824 INFO: relaunching applications... 
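json_config_test_shutdown_app, traced above, stops the target cooperatively: send SIGINT once, then poll until the PID disappears. The pattern reduces to the following (a sketch; $pid stands in for the recorded app_pid):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do           # up to ~15 s grace, matching the trace's 0.5 s sleeps
      kill -0 "$pid" 2>/dev/null || break  # kill -0 only tests existence; no signal is delivered
      sleep 0.5
  done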
00:04:03.824 13:00:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.824 13:00:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:03.824 13:00:45 json_config -- json_config/common.sh@10 -- # shift 00:04:03.824 13:00:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:03.824 13:00:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:03.824 13:00:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:03.824 13:00:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.824 13:00:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:03.824 13:00:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1487103 00:04:03.824 13:00:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:03.824 Waiting for target to run... 00:04:03.824 13:00:45 json_config -- json_config/common.sh@25 -- # waitforlisten 1487103 /var/tmp/spdk_tgt.sock 00:04:03.824 13:00:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@833 -- # '[' -z 1487103 ']' 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:03.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.824 13:00:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.085 [2024-11-06 13:00:45.771179] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:04.085 [2024-11-06 13:00:45.771234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487103 ] 00:04:04.345 [2024-11-06 13:00:46.129575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.346 [2024-11-06 13:00:46.163068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.918 [2024-11-06 13:00:46.662495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:04.918 [2024-11-06 13:00:46.694884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:04.918 13:00:46 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:04.918 13:00:46 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:04.918 13:00:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:04.918 00:04:04.918 13:00:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:04.918 13:00:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:04.918 INFO: Checking if target configuration is the same... 
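The relaunch feeds the configuration saved a moment earlier straight back into a fresh target via --json, so no RPCs have to be replayed by hand. Stripped of the harness wrapping, the launch shown in the trace is:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json    # core mask 0x1, 1024 MiB of memory, default RPC socket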
00:04:04.918 13:00:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.918 13:00:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:04.918 13:00:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.918 + '[' 2 -ne 2 ']' 00:04:04.918 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:04.918 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:04.918 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.918 +++ basename /dev/fd/62 00:04:04.918 ++ mktemp /tmp/62.XXX 00:04:04.918 + tmp_file_1=/tmp/62.hi9 00:04:04.918 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.918 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:04.918 + tmp_file_2=/tmp/spdk_tgt_config.json.bHB 00:04:04.918 + ret=0 00:04:04.918 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.180 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.441 + diff -u /tmp/62.hi9 /tmp/spdk_tgt_config.json.bHB 00:04:05.441 + echo 'INFO: JSON config files are the same' 00:04:05.441 INFO: JSON config files are the same 00:04:05.441 + rm /tmp/62.hi9 /tmp/spdk_tgt_config.json.bHB 00:04:05.441 + exit 0 00:04:05.441 13:00:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:05.441 13:00:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:05.441 INFO: changing configuration and checking if this can be detected... 00:04:05.441 13:00:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.441 13:00:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.441 13:00:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.441 13:00:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:05.441 13:00:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.441 + '[' 2 -ne 2 ']' 00:04:05.441 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.441 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
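The comparison that just reported 'JSON config files are the same' normalizes both sides before diffing: the live config is pulled with save_config, both JSON documents are canonicalized by config_filter.py -method sort, and a plain diff -u decides the result. A sketch of the pipeline (temp-file names here are illustrative; the harness uses mktemp):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'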
00:04:05.441 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.441 +++ basename /dev/fd/62 00:04:05.441 ++ mktemp /tmp/62.XXX 00:04:05.441 + tmp_file_1=/tmp/62.GSP 00:04:05.441 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.441 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.441 + tmp_file_2=/tmp/spdk_tgt_config.json.ruZ 00:04:05.441 + ret=0 00:04:05.441 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.013 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.013 + diff -u /tmp/62.GSP /tmp/spdk_tgt_config.json.ruZ 00:04:06.013 + ret=1 00:04:06.013 + echo '=== Start of file: /tmp/62.GSP ===' 00:04:06.013 + cat /tmp/62.GSP 00:04:06.013 + echo '=== End of file: /tmp/62.GSP ===' 00:04:06.013 + echo '' 00:04:06.013 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ruZ ===' 00:04:06.013 + cat /tmp/spdk_tgt_config.json.ruZ 00:04:06.013 + echo '=== End of file: /tmp/spdk_tgt_config.json.ruZ ===' 00:04:06.013 + echo '' 00:04:06.013 + rm /tmp/62.GSP /tmp/spdk_tgt_config.json.ruZ 00:04:06.013 + exit 1 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:06.013 INFO: configuration change detected. 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 1487103 ]] 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.013 13:00:47 json_config -- json_config/json_config.sh@330 -- # killprocess 1487103 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@952 -- # '[' -z 1487103 ']' 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@956 -- # kill -0 1487103 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@957 -- # uname 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:06.013 13:00:47 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1487103 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1487103' 00:04:06.013 killing process with pid 1487103 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@971 -- # kill 1487103 00:04:06.013 13:00:47 json_config -- common/autotest_common.sh@976 -- # wait 1487103 00:04:06.274 13:00:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.274 13:00:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:06.274 13:00:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.275 13:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.275 13:00:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:06.275 13:00:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:06.275 INFO: Success 00:04:06.275 00:04:06.275 real 0m7.543s 00:04:06.275 user 0m9.065s 00:04:06.275 sys 0m2.083s 00:04:06.275 13:00:48 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.275 13:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.275 ************************************ 00:04:06.275 END TEST json_config 00:04:06.275 ************************************ 00:04:06.275 13:00:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.275 13:00:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.275 13:00:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.275 13:00:48 -- common/autotest_common.sh@10 -- # set +x 00:04:06.536 ************************************ 00:04:06.536 START TEST json_config_extra_key 00:04:06.536 ************************************ 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.536 13:00:48 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.536 13:00:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.536 --rc genhtml_branch_coverage=1 00:04:06.536 --rc genhtml_function_coverage=1 00:04:06.536 --rc genhtml_legend=1 00:04:06.536 --rc geninfo_all_blocks=1 00:04:06.536 --rc geninfo_unexecuted_blocks=1 00:04:06.536 00:04:06.536 ' 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.536 --rc genhtml_branch_coverage=1 00:04:06.536 --rc genhtml_function_coverage=1 00:04:06.536 --rc genhtml_legend=1 00:04:06.536 --rc geninfo_all_blocks=1 00:04:06.536 --rc geninfo_unexecuted_blocks=1 00:04:06.536 00:04:06.536 ' 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.536 --rc genhtml_branch_coverage=1 00:04:06.536 --rc genhtml_function_coverage=1 00:04:06.536 --rc genhtml_legend=1 00:04:06.536 --rc geninfo_all_blocks=1 00:04:06.536 --rc geninfo_unexecuted_blocks=1 00:04:06.536 00:04:06.536 ' 00:04:06.536 13:00:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.537 --rc genhtml_branch_coverage=1 00:04:06.537 --rc genhtml_function_coverage=1 00:04:06.537 --rc genhtml_legend=1 00:04:06.537 --rc geninfo_all_blocks=1 00:04:06.537 --rc geninfo_unexecuted_blocks=1 00:04:06.537 00:04:06.537 ' 00:04:06.537 13:00:48 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.537 13:00:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.537 13:00:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.537 13:00:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.537 13:00:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.537 13:00:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.537 13:00:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.537 13:00:48 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.537 13:00:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:06.537 13:00:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.537 13:00:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:06.537 INFO: launching applications... 
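One artifact worth noting in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as a non-integer, hence the 'integer expression expected' message; the run continues because the test simply returns false. A guard like the following avoids the noise (a sketch; $flag is a stand-in for whichever variable was empty):

  [ -n "$flag" ] && [ "$flag" -eq 1 ] && echo 'flag set'    # compare with -eq only after confirming non-empty
  [ "${flag:-0}" -eq 1 ] && echo 'flag set'                 # or default the empty value to 0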
00:04:06.537 13:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1487892 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.537 Waiting for target to run... 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1487892 /var/tmp/spdk_tgt.sock 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 1487892 ']' 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:06.537 13:00:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:06.537 13:00:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.798 [2024-11-06 13:00:48.479881] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:06.798 [2024-11-06 13:00:48.479956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487892 ] 00:04:07.058 [2024-11-06 13:00:48.808210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.058 [2024-11-06 13:00:48.833735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.628 13:00:49 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:07.628 13:00:49 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:07.628 13:00:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:07.628 00:04:07.629 13:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:07.629 INFO: shutting down applications... 
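json_config/common.sh, sourced above, keys every launched app off a name rather than a raw PID: associative arrays hold the socket, extra parameters, config path, and PID per app. A condensed sketch of that bookkeeping for the single 'target' app seen in the trace:

  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=test/json_config/extra_key.json
  build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
      --json "${configs_path[target]}" &    # app_params intentionally unquoted so it word-splits
  app_pid[target]=$!                        # later consumed by the kill -SIGINT / kill -0 shutdown loop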
00:04:07.629 13:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1487892 ]] 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1487892 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1487892 00:04:07.629 13:00:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1487892 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.889 13:00:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.889 SPDK target shutdown done 00:04:07.889 13:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:07.889 Success 00:04:07.889 00:04:07.889 real 0m1.584s 00:04:07.889 user 0m1.184s 00:04:07.889 sys 0m0.436s 00:04:08.150 13:00:49 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.150 13:00:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 ************************************ 00:04:08.150 END TEST json_config_extra_key 00:04:08.150 ************************************ 00:04:08.150 13:00:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:08.150 13:00:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.150 13:00:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.150 13:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 ************************************ 00:04:08.150 START TEST alias_rpc 00:04:08.151 ************************************ 00:04:08.151 13:00:49 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:08.151 * Looking for test storage... 
00:04:08.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:08.151 13:00:49 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.151 13:00:49 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.151 13:00:49 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.412 13:00:50 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.412 13:00:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.413 --rc genhtml_branch_coverage=1 00:04:08.413 --rc genhtml_function_coverage=1 00:04:08.413 --rc genhtml_legend=1 00:04:08.413 --rc geninfo_all_blocks=1 00:04:08.413 --rc geninfo_unexecuted_blocks=1 00:04:08.413 00:04:08.413 ' 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.413 --rc genhtml_branch_coverage=1 00:04:08.413 --rc genhtml_function_coverage=1 00:04:08.413 --rc genhtml_legend=1 00:04:08.413 --rc geninfo_all_blocks=1 00:04:08.413 --rc geninfo_unexecuted_blocks=1 00:04:08.413 00:04:08.413 ' 00:04:08.413 13:00:50 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.413 --rc genhtml_branch_coverage=1 00:04:08.413 --rc genhtml_function_coverage=1 00:04:08.413 --rc genhtml_legend=1 00:04:08.413 --rc geninfo_all_blocks=1 00:04:08.413 --rc geninfo_unexecuted_blocks=1 00:04:08.413 00:04:08.413 ' 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.413 --rc genhtml_branch_coverage=1 00:04:08.413 --rc genhtml_function_coverage=1 00:04:08.413 --rc genhtml_legend=1 00:04:08.413 --rc geninfo_all_blocks=1 00:04:08.413 --rc geninfo_unexecuted_blocks=1 00:04:08.413 00:04:08.413 ' 00:04:08.413 13:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:08.413 13:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1488281 00:04:08.413 13:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1488281 00:04:08.413 13:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 1488281 ']' 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:08.413 13:00:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.413 [2024-11-06 13:00:50.142115] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:08.413 [2024-11-06 13:00:50.142188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488281 ] 00:04:08.413 [2024-11-06 13:00:50.230422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.413 [2024-11-06 13:00:50.270103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.356 13:00:50 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:09.356 13:00:50 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:09.356 13:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:09.356 13:00:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1488281 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 1488281 ']' 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 1488281 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1488281 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1488281' 00:04:09.356 killing process with pid 1488281 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@971 -- # kill 1488281 00:04:09.356 13:00:51 alias_rpc -- common/autotest_common.sh@976 -- # wait 1488281 00:04:09.616 00:04:09.616 real 0m1.524s 00:04:09.616 user 0m1.660s 00:04:09.616 sys 0m0.447s 00:04:09.616 13:00:51 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.616 13:00:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.616 ************************************ 00:04:09.616 END TEST alias_rpc 00:04:09.616 ************************************ 00:04:09.616 13:00:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:09.616 13:00:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.616 13:00:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.617 13:00:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.617 13:00:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.617 ************************************ 00:04:09.617 START TEST spdkcli_tcp 00:04:09.617 ************************************ 00:04:09.617 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:09.878 * Looking for test storage... 
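killprocess, used to tear down the alias_rpc target above, is defensive: it confirms the PID is still alive and checks whether the process is a sudo wrapper (which needs different handling) before a plain kill and wait. Roughly, the checks visible in the trace are:

  kill -0 "$pid"                                       # fail fast if the process is already gone
  name=$(ps --no-headers -o comm= "$pid")              # here: reactor_0
  [ "$name" = sudo ] || { kill "$pid"; wait "$pid"; }  # wait reaps the child and propagates its exit status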
00:04:09.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.878 13:00:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.878 --rc genhtml_branch_coverage=1 00:04:09.878 --rc genhtml_function_coverage=1 00:04:09.878 --rc genhtml_legend=1 00:04:09.878 --rc geninfo_all_blocks=1 00:04:09.878 --rc geninfo_unexecuted_blocks=1 00:04:09.878 00:04:09.878 ' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.878 --rc genhtml_branch_coverage=1 00:04:09.878 --rc genhtml_function_coverage=1 00:04:09.878 --rc genhtml_legend=1 00:04:09.878 --rc geninfo_all_blocks=1 00:04:09.878 --rc 
geninfo_unexecuted_blocks=1 00:04:09.878 00:04:09.878 ' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.878 --rc genhtml_branch_coverage=1 00:04:09.878 --rc genhtml_function_coverage=1 00:04:09.878 --rc genhtml_legend=1 00:04:09.878 --rc geninfo_all_blocks=1 00:04:09.878 --rc geninfo_unexecuted_blocks=1 00:04:09.878 00:04:09.878 ' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.878 --rc genhtml_branch_coverage=1 00:04:09.878 --rc genhtml_function_coverage=1 00:04:09.878 --rc genhtml_legend=1 00:04:09.878 --rc geninfo_all_blocks=1 00:04:09.878 --rc geninfo_unexecuted_blocks=1 00:04:09.878 00:04:09.878 ' 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1488621 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1488621 00:04:09.878 13:00:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1488621 ']' 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:09.878 13:00:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.878 [2024-11-06 13:00:51.742713] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:09.878 [2024-11-06 13:00:51.742793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488621 ] 00:04:10.139 [2024-11-06 13:00:51.831875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.140 [2024-11-06 13:00:51.868028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.140 [2024-11-06 13:00:51.868121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.710 13:00:52 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:10.710 13:00:52 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:10.710 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1488700 00:04:10.710 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:10.710 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:10.971 [ 00:04:10.971 "bdev_malloc_delete", 00:04:10.971 "bdev_malloc_create", 00:04:10.971 "bdev_null_resize", 00:04:10.971 "bdev_null_delete", 00:04:10.971 "bdev_null_create", 00:04:10.971 "bdev_nvme_cuse_unregister", 00:04:10.971 "bdev_nvme_cuse_register", 00:04:10.971 "bdev_opal_new_user", 00:04:10.971 "bdev_opal_set_lock_state", 00:04:10.971 "bdev_opal_delete", 00:04:10.971 "bdev_opal_get_info", 00:04:10.971 "bdev_opal_create", 00:04:10.971 "bdev_nvme_opal_revert", 00:04:10.971 "bdev_nvme_opal_init", 00:04:10.971 "bdev_nvme_send_cmd", 00:04:10.971 "bdev_nvme_set_keys", 00:04:10.971 "bdev_nvme_get_path_iostat", 00:04:10.971 "bdev_nvme_get_mdns_discovery_info", 00:04:10.971 "bdev_nvme_stop_mdns_discovery", 00:04:10.971 "bdev_nvme_start_mdns_discovery", 00:04:10.971 "bdev_nvme_set_multipath_policy", 00:04:10.971 "bdev_nvme_set_preferred_path", 00:04:10.971 "bdev_nvme_get_io_paths", 00:04:10.971 "bdev_nvme_remove_error_injection", 00:04:10.971 "bdev_nvme_add_error_injection", 00:04:10.971 "bdev_nvme_get_discovery_info", 00:04:10.971 "bdev_nvme_stop_discovery", 00:04:10.971 "bdev_nvme_start_discovery", 00:04:10.971 "bdev_nvme_get_controller_health_info", 00:04:10.971 "bdev_nvme_disable_controller", 00:04:10.971 "bdev_nvme_enable_controller", 00:04:10.971 "bdev_nvme_reset_controller", 00:04:10.971 "bdev_nvme_get_transport_statistics", 00:04:10.971 "bdev_nvme_apply_firmware", 00:04:10.971 "bdev_nvme_detach_controller", 00:04:10.971 "bdev_nvme_get_controllers", 00:04:10.971 "bdev_nvme_attach_controller", 00:04:10.971 "bdev_nvme_set_hotplug", 00:04:10.971 "bdev_nvme_set_options", 00:04:10.971 "bdev_passthru_delete", 00:04:10.971 "bdev_passthru_create", 00:04:10.971 "bdev_lvol_set_parent_bdev", 00:04:10.971 "bdev_lvol_set_parent", 00:04:10.971 "bdev_lvol_check_shallow_copy", 00:04:10.971 "bdev_lvol_start_shallow_copy", 00:04:10.971 "bdev_lvol_grow_lvstore", 00:04:10.971 "bdev_lvol_get_lvols", 00:04:10.971 "bdev_lvol_get_lvstores", 00:04:10.971 "bdev_lvol_delete", 00:04:10.971 "bdev_lvol_set_read_only", 00:04:10.971 "bdev_lvol_resize", 00:04:10.971 "bdev_lvol_decouple_parent", 00:04:10.971 "bdev_lvol_inflate", 00:04:10.971 "bdev_lvol_rename", 00:04:10.971 "bdev_lvol_clone_bdev", 00:04:10.971 "bdev_lvol_clone", 00:04:10.972 "bdev_lvol_snapshot", 00:04:10.972 "bdev_lvol_create", 00:04:10.972 "bdev_lvol_delete_lvstore", 00:04:10.972 "bdev_lvol_rename_lvstore", 
00:04:10.972 "bdev_lvol_create_lvstore", 00:04:10.972 "bdev_raid_set_options", 00:04:10.972 "bdev_raid_remove_base_bdev", 00:04:10.972 "bdev_raid_add_base_bdev", 00:04:10.972 "bdev_raid_delete", 00:04:10.972 "bdev_raid_create", 00:04:10.972 "bdev_raid_get_bdevs", 00:04:10.972 "bdev_error_inject_error", 00:04:10.972 "bdev_error_delete", 00:04:10.972 "bdev_error_create", 00:04:10.972 "bdev_split_delete", 00:04:10.972 "bdev_split_create", 00:04:10.972 "bdev_delay_delete", 00:04:10.972 "bdev_delay_create", 00:04:10.972 "bdev_delay_update_latency", 00:04:10.972 "bdev_zone_block_delete", 00:04:10.972 "bdev_zone_block_create", 00:04:10.972 "blobfs_create", 00:04:10.972 "blobfs_detect", 00:04:10.972 "blobfs_set_cache_size", 00:04:10.972 "bdev_aio_delete", 00:04:10.972 "bdev_aio_rescan", 00:04:10.972 "bdev_aio_create", 00:04:10.972 "bdev_ftl_set_property", 00:04:10.972 "bdev_ftl_get_properties", 00:04:10.972 "bdev_ftl_get_stats", 00:04:10.972 "bdev_ftl_unmap", 00:04:10.972 "bdev_ftl_unload", 00:04:10.972 "bdev_ftl_delete", 00:04:10.972 "bdev_ftl_load", 00:04:10.972 "bdev_ftl_create", 00:04:10.972 "bdev_virtio_attach_controller", 00:04:10.972 "bdev_virtio_scsi_get_devices", 00:04:10.972 "bdev_virtio_detach_controller", 00:04:10.972 "bdev_virtio_blk_set_hotplug", 00:04:10.972 "bdev_iscsi_delete", 00:04:10.972 "bdev_iscsi_create", 00:04:10.972 "bdev_iscsi_set_options", 00:04:10.972 "accel_error_inject_error", 00:04:10.972 "ioat_scan_accel_module", 00:04:10.972 "dsa_scan_accel_module", 00:04:10.972 "iaa_scan_accel_module", 00:04:10.972 "vfu_virtio_create_fs_endpoint", 00:04:10.972 "vfu_virtio_create_scsi_endpoint", 00:04:10.972 "vfu_virtio_scsi_remove_target", 00:04:10.972 "vfu_virtio_scsi_add_target", 00:04:10.972 "vfu_virtio_create_blk_endpoint", 00:04:10.972 "vfu_virtio_delete_endpoint", 00:04:10.972 "keyring_file_remove_key", 00:04:10.972 "keyring_file_add_key", 00:04:10.972 "keyring_linux_set_options", 00:04:10.972 "fsdev_aio_delete", 00:04:10.972 "fsdev_aio_create", 00:04:10.972 "iscsi_get_histogram", 00:04:10.972 "iscsi_enable_histogram", 00:04:10.972 "iscsi_set_options", 00:04:10.972 "iscsi_get_auth_groups", 00:04:10.972 "iscsi_auth_group_remove_secret", 00:04:10.972 "iscsi_auth_group_add_secret", 00:04:10.972 "iscsi_delete_auth_group", 00:04:10.972 "iscsi_create_auth_group", 00:04:10.972 "iscsi_set_discovery_auth", 00:04:10.972 "iscsi_get_options", 00:04:10.972 "iscsi_target_node_request_logout", 00:04:10.972 "iscsi_target_node_set_redirect", 00:04:10.972 "iscsi_target_node_set_auth", 00:04:10.972 "iscsi_target_node_add_lun", 00:04:10.972 "iscsi_get_stats", 00:04:10.972 "iscsi_get_connections", 00:04:10.972 "iscsi_portal_group_set_auth", 00:04:10.972 "iscsi_start_portal_group", 00:04:10.972 "iscsi_delete_portal_group", 00:04:10.972 "iscsi_create_portal_group", 00:04:10.972 "iscsi_get_portal_groups", 00:04:10.972 "iscsi_delete_target_node", 00:04:10.972 "iscsi_target_node_remove_pg_ig_maps", 00:04:10.972 "iscsi_target_node_add_pg_ig_maps", 00:04:10.972 "iscsi_create_target_node", 00:04:10.972 "iscsi_get_target_nodes", 00:04:10.972 "iscsi_delete_initiator_group", 00:04:10.972 "iscsi_initiator_group_remove_initiators", 00:04:10.972 "iscsi_initiator_group_add_initiators", 00:04:10.972 "iscsi_create_initiator_group", 00:04:10.972 "iscsi_get_initiator_groups", 00:04:10.972 "nvmf_set_crdt", 00:04:10.972 "nvmf_set_config", 00:04:10.972 "nvmf_set_max_subsystems", 00:04:10.972 "nvmf_stop_mdns_prr", 00:04:10.972 "nvmf_publish_mdns_prr", 00:04:10.972 "nvmf_subsystem_get_listeners", 00:04:10.972 
"nvmf_subsystem_get_qpairs", 00:04:10.972 "nvmf_subsystem_get_controllers", 00:04:10.972 "nvmf_get_stats", 00:04:10.972 "nvmf_get_transports", 00:04:10.972 "nvmf_create_transport", 00:04:10.972 "nvmf_get_targets", 00:04:10.972 "nvmf_delete_target", 00:04:10.972 "nvmf_create_target", 00:04:10.972 "nvmf_subsystem_allow_any_host", 00:04:10.972 "nvmf_subsystem_set_keys", 00:04:10.972 "nvmf_subsystem_remove_host", 00:04:10.972 "nvmf_subsystem_add_host", 00:04:10.972 "nvmf_ns_remove_host", 00:04:10.972 "nvmf_ns_add_host", 00:04:10.972 "nvmf_subsystem_remove_ns", 00:04:10.972 "nvmf_subsystem_set_ns_ana_group", 00:04:10.972 "nvmf_subsystem_add_ns", 00:04:10.972 "nvmf_subsystem_listener_set_ana_state", 00:04:10.972 "nvmf_discovery_get_referrals", 00:04:10.972 "nvmf_discovery_remove_referral", 00:04:10.972 "nvmf_discovery_add_referral", 00:04:10.972 "nvmf_subsystem_remove_listener", 00:04:10.972 "nvmf_subsystem_add_listener", 00:04:10.972 "nvmf_delete_subsystem", 00:04:10.972 "nvmf_create_subsystem", 00:04:10.972 "nvmf_get_subsystems", 00:04:10.972 "env_dpdk_get_mem_stats", 00:04:10.972 "nbd_get_disks", 00:04:10.972 "nbd_stop_disk", 00:04:10.972 "nbd_start_disk", 00:04:10.972 "ublk_recover_disk", 00:04:10.972 "ublk_get_disks", 00:04:10.972 "ublk_stop_disk", 00:04:10.972 "ublk_start_disk", 00:04:10.972 "ublk_destroy_target", 00:04:10.972 "ublk_create_target", 00:04:10.972 "virtio_blk_create_transport", 00:04:10.972 "virtio_blk_get_transports", 00:04:10.972 "vhost_controller_set_coalescing", 00:04:10.972 "vhost_get_controllers", 00:04:10.972 "vhost_delete_controller", 00:04:10.972 "vhost_create_blk_controller", 00:04:10.972 "vhost_scsi_controller_remove_target", 00:04:10.972 "vhost_scsi_controller_add_target", 00:04:10.972 "vhost_start_scsi_controller", 00:04:10.972 "vhost_create_scsi_controller", 00:04:10.972 "thread_set_cpumask", 00:04:10.972 "scheduler_set_options", 00:04:10.972 "framework_get_governor", 00:04:10.972 "framework_get_scheduler", 00:04:10.972 "framework_set_scheduler", 00:04:10.972 "framework_get_reactors", 00:04:10.972 "thread_get_io_channels", 00:04:10.972 "thread_get_pollers", 00:04:10.972 "thread_get_stats", 00:04:10.972 "framework_monitor_context_switch", 00:04:10.972 "spdk_kill_instance", 00:04:10.972 "log_enable_timestamps", 00:04:10.972 "log_get_flags", 00:04:10.972 "log_clear_flag", 00:04:10.972 "log_set_flag", 00:04:10.972 "log_get_level", 00:04:10.972 "log_set_level", 00:04:10.972 "log_get_print_level", 00:04:10.972 "log_set_print_level", 00:04:10.972 "framework_enable_cpumask_locks", 00:04:10.972 "framework_disable_cpumask_locks", 00:04:10.972 "framework_wait_init", 00:04:10.972 "framework_start_init", 00:04:10.972 "scsi_get_devices", 00:04:10.972 "bdev_get_histogram", 00:04:10.972 "bdev_enable_histogram", 00:04:10.972 "bdev_set_qos_limit", 00:04:10.972 "bdev_set_qd_sampling_period", 00:04:10.972 "bdev_get_bdevs", 00:04:10.972 "bdev_reset_iostat", 00:04:10.972 "bdev_get_iostat", 00:04:10.972 "bdev_examine", 00:04:10.972 "bdev_wait_for_examine", 00:04:10.972 "bdev_set_options", 00:04:10.972 "accel_get_stats", 00:04:10.972 "accel_set_options", 00:04:10.972 "accel_set_driver", 00:04:10.972 "accel_crypto_key_destroy", 00:04:10.972 "accel_crypto_keys_get", 00:04:10.972 "accel_crypto_key_create", 00:04:10.972 "accel_assign_opc", 00:04:10.972 "accel_get_module_info", 00:04:10.972 "accel_get_opc_assignments", 00:04:10.972 "vmd_rescan", 00:04:10.972 "vmd_remove_device", 00:04:10.972 "vmd_enable", 00:04:10.972 "sock_get_default_impl", 00:04:10.972 "sock_set_default_impl", 
00:04:10.972 "sock_impl_set_options", 00:04:10.972 "sock_impl_get_options", 00:04:10.972 "iobuf_get_stats", 00:04:10.972 "iobuf_set_options", 00:04:10.972 "keyring_get_keys", 00:04:10.972 "vfu_tgt_set_base_path", 00:04:10.972 "framework_get_pci_devices", 00:04:10.972 "framework_get_config", 00:04:10.972 "framework_get_subsystems", 00:04:10.972 "fsdev_set_opts", 00:04:10.972 "fsdev_get_opts", 00:04:10.972 "trace_get_info", 00:04:10.972 "trace_get_tpoint_group_mask", 00:04:10.972 "trace_disable_tpoint_group", 00:04:10.972 "trace_enable_tpoint_group", 00:04:10.972 "trace_clear_tpoint_mask", 00:04:10.972 "trace_set_tpoint_mask", 00:04:10.972 "notify_get_notifications", 00:04:10.972 "notify_get_types", 00:04:10.972 "spdk_get_version", 00:04:10.972 "rpc_get_methods" 00:04:10.972 ] 00:04:10.972 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.972 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:10.972 13:00:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1488621 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1488621 ']' 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1488621 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1488621 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1488621' 00:04:10.972 killing process with pid 1488621 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1488621 00:04:10.972 13:00:52 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1488621 00:04:11.233 00:04:11.233 real 0m1.528s 00:04:11.233 user 0m2.782s 00:04:11.233 sys 0m0.462s 00:04:11.233 13:00:53 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:11.233 13:00:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:11.233 ************************************ 00:04:11.233 END TEST spdkcli_tcp 00:04:11.233 ************************************ 00:04:11.233 13:00:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.233 13:00:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:11.233 13:00:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:11.233 13:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:11.233 ************************************ 00:04:11.233 START TEST dpdk_mem_utility 00:04:11.233 ************************************ 00:04:11.233 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:11.495 * Looking for test storage... 
00:04:11.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.495 13:00:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.495 --rc genhtml_branch_coverage=1 00:04:11.495 --rc genhtml_function_coverage=1 00:04:11.495 --rc genhtml_legend=1 00:04:11.495 --rc geninfo_all_blocks=1 00:04:11.495 --rc geninfo_unexecuted_blocks=1 00:04:11.495 00:04:11.495 ' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.495 --rc 
genhtml_branch_coverage=1 00:04:11.495 --rc genhtml_function_coverage=1 00:04:11.495 --rc genhtml_legend=1 00:04:11.495 --rc geninfo_all_blocks=1 00:04:11.495 --rc geninfo_unexecuted_blocks=1 00:04:11.495 00:04:11.495 ' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.495 --rc genhtml_branch_coverage=1 00:04:11.495 --rc genhtml_function_coverage=1 00:04:11.495 --rc genhtml_legend=1 00:04:11.495 --rc geninfo_all_blocks=1 00:04:11.495 --rc geninfo_unexecuted_blocks=1 00:04:11.495 00:04:11.495 ' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.495 --rc genhtml_branch_coverage=1 00:04:11.495 --rc genhtml_function_coverage=1 00:04:11.495 --rc genhtml_legend=1 00:04:11.495 --rc geninfo_all_blocks=1 00:04:11.495 --rc geninfo_unexecuted_blocks=1 00:04:11.495 00:04:11.495 ' 00:04:11.495 13:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:11.495 13:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1488985 00:04:11.495 13:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.495 13:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1488985 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1488985 ']' 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.495 13:00:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.495 [2024-11-06 13:00:53.322711] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
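The dpdk_mem_utility test starting here follows a simple dump-summarize-drill-down pattern: test_dpdk_mem_info.sh@19 asks the running target to dump its DPDK allocator state, then dpdk_mem_info.py parses that dump twice. A condensed sketch of the same flow, with paths relative to the spdk checkout (the dump file name is the one the RPC reports back in the trace below):

    build/bin/spdk_tgt &                    # start a target and wait for its RPC socket
    scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                # summarize heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0           # per-element listing for heap 0

Both outputs follow in the trace: first the totals (one 810 MiB heap, nine mempools, six memzones), then heap 0's busy and free element lists.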
00:04:11.495 [2024-11-06 13:00:53.322790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488985 ] 00:04:11.755 [2024-11-06 13:00:53.428228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.755 [2024-11-06 13:00:53.461289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.328 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:12.328 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:12.328 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:12.328 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:12.328 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.328 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.328 { 00:04:12.328 "filename": "/tmp/spdk_mem_dump.txt" 00:04:12.328 } 00:04:12.328 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.328 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:12.328 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:12.328 1 heaps totaling size 810.000000 MiB 00:04:12.328 size: 810.000000 MiB heap id: 0 00:04:12.328 end heaps---------- 00:04:12.328 9 mempools totaling size 595.772034 MiB 00:04:12.328 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:12.328 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:12.328 size: 92.545471 MiB name: bdev_io_1488985 00:04:12.328 size: 50.003479 MiB name: msgpool_1488985 00:04:12.328 size: 36.509338 MiB name: fsdev_io_1488985 00:04:12.328 size: 21.763794 MiB name: PDU_Pool 00:04:12.328 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:12.328 size: 4.133484 MiB name: evtpool_1488985 00:04:12.328 size: 0.026123 MiB name: Session_Pool 00:04:12.328 end mempools------- 00:04:12.328 6 memzones totaling size 4.142822 MiB 00:04:12.328 size: 1.000366 MiB name: RG_ring_0_1488985 00:04:12.328 size: 1.000366 MiB name: RG_ring_1_1488985 00:04:12.328 size: 1.000366 MiB name: RG_ring_4_1488985 00:04:12.328 size: 1.000366 MiB name: RG_ring_5_1488985 00:04:12.328 size: 0.125366 MiB name: RG_ring_2_1488985 00:04:12.328 size: 0.015991 MiB name: RG_ring_3_1488985 00:04:12.328 end memzones------- 00:04:12.328 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:12.328 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:12.328 list of free elements. 
size: 10.862488 MiB 00:04:12.328 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:12.328 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:12.328 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:12.328 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:12.328 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:12.328 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:12.328 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:12.328 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:12.328 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:12.328 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:12.328 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:12.328 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:12.328 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:12.328 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:12.328 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:12.328 list of standard malloc elements. size: 199.218628 MiB 00:04:12.328 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:12.328 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:12.328 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:12.328 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:12.328 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:12.328 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:12.328 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:12.328 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:12.328 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:12.328 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:12.328 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:12.328 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:12.328 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:12.328 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:12.328 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:12.329 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:12.329 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:12.329 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:12.329 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:12.329 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:12.329 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:12.329 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:12.329 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:12.329 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:12.329 list of memzone associated elements. size: 599.918884 MiB 00:04:12.329 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:12.329 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:12.329 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:12.329 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:12.329 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:12.329 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1488985_0 00:04:12.329 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:12.329 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1488985_0 00:04:12.329 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:12.329 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1488985_0 00:04:12.329 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:12.329 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:12.329 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:12.329 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:12.329 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:12.329 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1488985_0 00:04:12.329 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:12.329 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1488985 00:04:12.329 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:12.329 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1488985 00:04:12.329 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:12.329 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:12.329 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:12.329 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:12.329 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:12.329 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:12.329 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:12.329 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:12.329 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:12.329 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1488985 00:04:12.329 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:12.329 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1488985 00:04:12.329 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:12.329 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1488985 00:04:12.329 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:12.329 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1488985 00:04:12.329 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:12.329 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1488985 00:04:12.329 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:12.329 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1488985 00:04:12.329 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:12.329 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:12.329 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:12.329 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:12.329 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:12.329 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:12.329 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:12.329 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1488985 00:04:12.329 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:12.329 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1488985 00:04:12.329 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:12.329 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:12.329 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:12.329 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:12.329 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:12.329 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1488985 00:04:12.329 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:12.329 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:12.329 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:12.329 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1488985 00:04:12.330 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:12.330 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1488985 00:04:12.330 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:12.330 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1488985 00:04:12.330 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:12.330 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:12.330 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:12.330 13:00:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1488985 00:04:12.330 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1488985 ']' 00:04:12.330 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1488985 00:04:12.330 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:12.330 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:12.330 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1488985 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1488985' 00:04:12.589 killing process with pid 1488985 00:04:12.589 13:00:54 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1488985 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1488985 00:04:12.589 00:04:12.589 real 0m1.376s 00:04:12.589 user 0m1.423s 00:04:12.589 sys 0m0.413s 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.589 13:00:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:12.589 ************************************ 00:04:12.589 END TEST dpdk_mem_utility 00:04:12.589 ************************************ 00:04:12.589 13:00:54 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:12.589 13:00:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.589 13:00:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.850 13:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:12.850 ************************************ 00:04:12.850 START TEST event 00:04:12.850 ************************************ 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:12.850 * Looking for test storage... 00:04:12.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:12.850 13:00:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.850 13:00:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.850 13:00:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.850 13:00:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.850 13:00:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.850 13:00:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.850 13:00:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.850 13:00:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.850 13:00:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.850 13:00:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.850 13:00:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.850 13:00:54 event -- scripts/common.sh@344 -- # case "$op" in 00:04:12.850 13:00:54 event -- scripts/common.sh@345 -- # : 1 00:04:12.850 13:00:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.850 13:00:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.850 13:00:54 event -- scripts/common.sh@365 -- # decimal 1 00:04:12.850 13:00:54 event -- scripts/common.sh@353 -- # local d=1 00:04:12.850 13:00:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.850 13:00:54 event -- scripts/common.sh@355 -- # echo 1 00:04:12.850 13:00:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.850 13:00:54 event -- scripts/common.sh@366 -- # decimal 2 00:04:12.850 13:00:54 event -- scripts/common.sh@353 -- # local d=2 00:04:12.850 13:00:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.850 13:00:54 event -- scripts/common.sh@355 -- # echo 2 00:04:12.850 13:00:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.850 13:00:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.850 13:00:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.850 13:00:54 event -- scripts/common.sh@368 -- # return 0 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.850 --rc genhtml_branch_coverage=1 00:04:12.850 --rc genhtml_function_coverage=1 00:04:12.850 --rc genhtml_legend=1 00:04:12.850 --rc geninfo_all_blocks=1 00:04:12.850 --rc geninfo_unexecuted_blocks=1 00:04:12.850 00:04:12.850 ' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.850 --rc genhtml_branch_coverage=1 00:04:12.850 --rc genhtml_function_coverage=1 00:04:12.850 --rc genhtml_legend=1 00:04:12.850 --rc geninfo_all_blocks=1 00:04:12.850 --rc geninfo_unexecuted_blocks=1 00:04:12.850 00:04:12.850 ' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.850 --rc genhtml_branch_coverage=1 00:04:12.850 --rc genhtml_function_coverage=1 00:04:12.850 --rc genhtml_legend=1 00:04:12.850 --rc geninfo_all_blocks=1 00:04:12.850 --rc geninfo_unexecuted_blocks=1 00:04:12.850 00:04:12.850 ' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:12.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.850 --rc genhtml_branch_coverage=1 00:04:12.850 --rc genhtml_function_coverage=1 00:04:12.850 --rc genhtml_legend=1 00:04:12.850 --rc geninfo_all_blocks=1 00:04:12.850 --rc geninfo_unexecuted_blocks=1 00:04:12.850 00:04:12.850 ' 00:04:12.850 13:00:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:12.850 13:00:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:12.850 13:00:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:12.850 13:00:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.850 13:00:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.110 ************************************ 00:04:13.110 START TEST event_perf 00:04:13.110 ************************************ 00:04:13.110 13:00:54 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:13.110 Running I/O for 1 seconds...[2024-11-06 13:00:54.790429] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:13.110 [2024-11-06 13:00:54.790531] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489270 ] 00:04:13.110 [2024-11-06 13:00:54.877010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:13.110 [2024-11-06 13:00:54.912457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.110 [2024-11-06 13:00:54.912594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:13.110 [2024-11-06 13:00:54.912751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.110 [2024-11-06 13:00:54.912762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:14.052 Running I/O for 1 seconds... 00:04:14.052 lcore 0: 181720 00:04:14.052 lcore 1: 181723 00:04:14.052 lcore 2: 181721 00:04:14.052 lcore 3: 181719 00:04:14.052 done. 00:04:14.052 00:04:14.052 real 0m1.171s 00:04:14.052 user 0m4.090s 00:04:14.052 sys 0m0.079s 00:04:14.052 13:00:55 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.052 13:00:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.052 ************************************ 00:04:14.052 END TEST event_perf 00:04:14.052 ************************************ 00:04:14.313 13:00:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.313 13:00:55 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:14.313 13:00:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.313 13:00:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.313 ************************************ 00:04:14.313 START TEST event_reactor 00:04:14.313 ************************************ 00:04:14.314 13:00:56 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:14.314 [2024-11-06 13:00:56.039549] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
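The event_perf numbers recorded above come from a fixed-time run: the tool appears to start one reactor per bit in the core mask, keep each one dispatching events for the requested duration, and print a per-lcore event count at the end (with mask 0xF, all four lcores landed near 181k events in this one-second run). Roughly the invocation the test script made:

    # Event-dispatch microbenchmark: -m is the reactor core mask, -t the seconds to run.
    test/event/event_perf/event_perf -m 0xF -t 1
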
00:04:14.314 [2024-11-06 13:00:56.039651] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489534 ] 00:04:14.314 [2024-11-06 13:00:56.127799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.314 [2024-11-06 13:00:56.156983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.699 test_start 00:04:15.699 oneshot 00:04:15.699 tick 100 00:04:15.699 tick 100 00:04:15.699 tick 250 00:04:15.699 tick 100 00:04:15.699 tick 100 00:04:15.699 tick 250 00:04:15.699 tick 100 00:04:15.699 tick 500 00:04:15.699 tick 100 00:04:15.699 tick 100 00:04:15.699 tick 250 00:04:15.699 tick 100 00:04:15.699 tick 100 00:04:15.699 test_end 00:04:15.699 00:04:15.699 real 0m1.165s 00:04:15.699 user 0m1.083s 00:04:15.699 sys 0m0.077s 00:04:15.699 13:00:57 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.699 13:00:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:15.699 ************************************ 00:04:15.699 END TEST event_reactor 00:04:15.699 ************************************ 00:04:15.699 13:00:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.699 13:00:57 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:15.699 13:00:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.699 13:00:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.699 ************************************ 00:04:15.699 START TEST event_reactor_perf 00:04:15.699 ************************************ 00:04:15.699 13:00:57 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:15.699 [2024-11-06 13:00:57.283259] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
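The reactor test above is a functional check rather than a benchmark: between test_start and test_end a single reactor runs a one-shot event plus what are plausibly several timed pollers, with the "tick 100/250/500" lines corresponding to three different poller periods (the period-100 ones firing most often, the 500 one once). The invocation is simply:

    # Single-core reactor/poller smoke test; -t is the run time in seconds.
    test/event/reactor/reactor -t 1
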
00:04:15.699 [2024-11-06 13:00:57.283363] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489886 ] 00:04:15.699 [2024-11-06 13:00:57.369918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.699 [2024-11-06 13:00:57.399500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.641 test_start 00:04:16.641 test_end 00:04:16.641 Performance: 534593 events per second 00:04:16.641 00:04:16.641 real 0m1.165s 00:04:16.641 user 0m1.081s 00:04:16.641 sys 0m0.080s 00:04:16.641 13:00:58 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.641 13:00:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:16.641 ************************************ 00:04:16.641 END TEST event_reactor_perf 00:04:16.641 ************************************ 00:04:16.641 13:00:58 event -- event/event.sh@49 -- # uname -s 00:04:16.641 13:00:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:16.641 13:00:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:16.641 13:00:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.641 13:00:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.641 13:00:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.641 ************************************ 00:04:16.641 START TEST event_scheduler 00:04:16.641 ************************************ 00:04:16.641 13:00:58 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:16.902 * Looking for test storage... 
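reactor_perf, by contrast, measures raw event throughput on a single reactor and reports one number, here 534593 events per second. It is run the same way:

    # Single-core event throughput; prints "Performance: <N> events per second".
    test/event/reactor_perf/reactor_perf -t 1
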
00:04:16.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:16.902 13:00:58 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:16.902 13:00:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:16.902 13:00:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.903 13:00:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.903 --rc genhtml_branch_coverage=1 00:04:16.903 --rc genhtml_function_coverage=1 00:04:16.903 --rc genhtml_legend=1 00:04:16.903 --rc geninfo_all_blocks=1 00:04:16.903 --rc geninfo_unexecuted_blocks=1 00:04:16.903 00:04:16.903 ' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.903 --rc genhtml_branch_coverage=1 00:04:16.903 --rc genhtml_function_coverage=1 00:04:16.903 --rc genhtml_legend=1 00:04:16.903 --rc geninfo_all_blocks=1 00:04:16.903 --rc geninfo_unexecuted_blocks=1 00:04:16.903 00:04:16.903 ' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:16.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.903 --rc genhtml_branch_coverage=1 00:04:16.903 --rc genhtml_function_coverage=1 00:04:16.903 --rc genhtml_legend=1 00:04:16.903 --rc geninfo_all_blocks=1 00:04:16.903 --rc geninfo_unexecuted_blocks=1 00:04:16.903 00:04:16.903 ' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.903 --rc genhtml_branch_coverage=1 00:04:16.903 --rc genhtml_function_coverage=1 00:04:16.903 --rc genhtml_legend=1 00:04:16.903 --rc geninfo_all_blocks=1 00:04:16.903 --rc geninfo_unexecuted_blocks=1 00:04:16.903 00:04:16.903 ' 00:04:16.903 13:00:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:16.903 13:00:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1490276 00:04:16.903 13:00:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.903 13:00:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1490276 00:04:16.903 13:00:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1490276 ']' 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.903 13:00:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.903 [2024-11-06 13:00:58.758393] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:16.903 [2024-11-06 13:00:58.758461] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490276 ] 00:04:17.164 [2024-11-06 13:00:58.852314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.164 [2024-11-06 13:00:58.907879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.164 [2024-11-06 13:00:58.908041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.164 [2024-11-06 13:00:58.908199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.164 [2024-11-06 13:00:58.908198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:17.735 13:00:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.735 [2024-11-06 13:00:59.578480] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:17.735 [2024-11-06 13:00:59.578499] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:17.735 [2024-11-06 13:00:59.578508] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:17.735 [2024-11-06 13:00:59.578514] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:17.735 [2024-11-06 13:00:59.578520] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.735 13:00:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.735 13:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 [2024-11-06 13:00:59.645291] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
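The sequence above is the part worth noting about scheduler configuration: the app is launched with --wait-for-rpc, so scheduler.sh@39 can select the dynamic scheduler before subsystem initialization, and only then does scheduler.sh@40 call framework_start_init (the set_opts notices show the dynamic scheduler's defaults in this run: load limit 20, core limit 80, core busy 95). The same ordering against a plain target would look roughly like this; every RPC name here appears in the rpc_get_methods listing earlier in this log:

    build/bin/spdk_tgt -m 0xF --wait-for-rpc &
    scripts/rpc.py framework_set_scheduler dynamic   # before init, as the test does
    scripts/rpc.py framework_start_init
    scripts/rpc.py framework_get_scheduler           # confirm the active scheduler
    scripts/rpc.py framework_get_reactors            # inspect thread placement per reactor
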
00:04:17.997 13:00:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:17.997 13:00:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.997 13:00:59 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 ************************************ 00:04:17.997 START TEST scheduler_create_thread 00:04:17.997 ************************************ 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 2 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 3 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 4 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 5 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 6 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 7 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 8 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.997 9 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.997 13:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.569 10 00:04:18.569 13:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:18.569 13:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:18.569 13:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.569 13:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.955 13:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.955 13:01:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:19.955 13:01:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:19.955 13:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.955 13:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.525 13:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.525 13:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:20.525 13:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.525 13:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.465 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.465 13:01:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.465 13:01:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.465 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.465 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.035 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.035 00:04:22.035 real 0m4.223s 00:04:22.035 user 0m0.023s 00:04:22.035 sys 0m0.009s 00:04:22.035 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.035 13:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.035 ************************************ 00:04:22.035 END TEST scheduler_create_thread 00:04:22.035 ************************************ 00:04:22.295 13:01:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:22.295 13:01:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1490276 00:04:22.295 13:01:03 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1490276 ']' 00:04:22.295 13:01:03 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 1490276 00:04:22.296 13:01:03 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:22.296 13:01:03 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.296 13:01:03 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1490276 00:04:22.296 13:01:04 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:22.296 13:01:04 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:22.296 13:01:04 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1490276' 00:04:22.296 killing process with pid 1490276 00:04:22.296 13:01:04 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1490276 00:04:22.296 13:01:04 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1490276 00:04:22.296 [2024-11-06 13:01:04.187051] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
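scheduler_create_thread, which just completed, exercises test-only RPCs that the scheduler app exposes through an rpc.py plugin (hence the --plugin scheduler_plugin argument on every call above). A trimmed sketch of the lifecycle the test walked through; the thread ids 11 and 12 are the ones this run returned, and -n/-m/-a appear to name the thread, pin it with a cpumask, and set its target busy percentage:

    RPC="scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
    $RPC scheduler_thread_create -n half_active -a 0              # this run returned thread_id=11
    $RPC scheduler_thread_set_active 11 50                        # raise its load to 50%
    $RPC scheduler_thread_create -n deleted -a 100                # this run returned thread_id=12
    $RPC scheduler_thread_delete 12
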
00:04:22.556 00:04:22.556 real 0m5.833s 00:04:22.556 user 0m12.875s 00:04:22.556 sys 0m0.430s 00:04:22.556 13:01:04 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.556 13:01:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.556 ************************************ 00:04:22.556 END TEST event_scheduler 00:04:22.556 ************************************ 00:04:22.556 13:01:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.556 13:01:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.556 13:01:04 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.556 13:01:04 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.556 13:01:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.556 ************************************ 00:04:22.556 START TEST app_repeat 00:04:22.556 ************************************ 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1491343 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1491343' 00:04:22.556 Process app_repeat pid: 1491343 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.556 spdk_app_start Round 0 00:04:22.556 13:01:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1491343 /var/tmp/spdk-nbd.sock 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1491343 ']' 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.556 13:01:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.817 [2024-11-06 13:01:04.466758] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
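app_repeat is launched against /var/tmp/spdk-nbd.sock with a two-core mask (-m 0x3) and a 4-second round timeout (-t 4), after which the harness blocks in waitforlisten until the RPC socket appears. A simplified sketch of that wait loop, assuming a plain socket-existence probe (the real helper also retries an actual RPC call before declaring the app ready):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # give up immediately if the app died during startup
            kill -0 "$pid" 2>/dev/null || return 1
            # -S is true once the path exists and is a socket
            [ -S "$rpc_addr" ] && return 0
            sleep 0.1
        done
        return 1
    }
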
00:04:22.817 [2024-11-06 13:01:04.466841] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491343 ] 00:04:22.817 [2024-11-06 13:01:04.551529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:22.817 [2024-11-06 13:01:04.582675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.817 [2024-11-06 13:01:04.582675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.817 13:01:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:22.817 13:01:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:22.817 13:01:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.078 Malloc0 00:04:23.078 13:01:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.338 Malloc1 00:04:23.338 13:01:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.338 /dev/nbd0 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.338 13:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.338 13:01:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:23.338 13:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:23.338 13:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:23.338 13:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:23.338 13:01:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.598 1+0 records in 00:04:23.598 1+0 records out 00:04:23.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277496 s, 14.8 MB/s 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:23.598 13:01:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:23.598 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.599 /dev/nbd1 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.599 1+0 records in 00:04:23.599 1+0 records out 00:04:23.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292959 s, 14.0 MB/s 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:23.599 13:01:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.599 
13:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.599 13:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.860 { 00:04:23.860 "nbd_device": "/dev/nbd0", 00:04:23.860 "bdev_name": "Malloc0" 00:04:23.860 }, 00:04:23.860 { 00:04:23.860 "nbd_device": "/dev/nbd1", 00:04:23.860 "bdev_name": "Malloc1" 00:04:23.860 } 00:04:23.860 ]' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.860 { 00:04:23.860 "nbd_device": "/dev/nbd0", 00:04:23.860 "bdev_name": "Malloc0" 00:04:23.860 }, 00:04:23.860 { 00:04:23.860 "nbd_device": "/dev/nbd1", 00:04:23.860 "bdev_name": "Malloc1" 00:04:23.860 } 00:04:23.860 ]' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.860 /dev/nbd1' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.860 /dev/nbd1' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.860 256+0 records in 00:04:23.860 256+0 records out 00:04:23.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118439 s, 88.5 MB/s 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.860 256+0 records in 00:04:23.860 256+0 records out 00:04:23.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011985 s, 87.5 MB/s 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.860 256+0 records in 00:04:23.860 256+0 records out 00:04:23.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012937 s, 81.1 MB/s 00:04:23.860 13:01:05 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.860 13:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.121 13:01:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.382 13:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.643 13:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.643 13:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.644 13:01:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.644 13:01:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.906 13:01:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.906 [2024-11-06 13:01:06.666440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.906 [2024-11-06 13:01:06.695486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.906 [2024-11-06 13:01:06.695486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.906 [2024-11-06 13:01:06.724945] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:24.906 [2024-11-06 13:01:06.724973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.206 13:01:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.206 13:01:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.206 spdk_app_start Round 1 00:04:28.206 13:01:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1491343 /var/tmp/spdk-nbd.sock 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1491343 ']' 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
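Round 0 just exercised the full NBD write/verify cycle: 1 MiB of /dev/urandom is staged in nbdrandtest, copied onto each exported device with O_DIRECT, then compared back byte-for-byte. The same cycle condensed into a self-contained sketch; the paths and device names are placeholders for the workspace-specific ones in the trace:

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # oflag=direct bypasses the page cache so the bdev is really written
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        # cmp -b reports differing bytes; -n 1M bounds the comparison window
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
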
00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.206 13:01:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:28.206 13:01:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.206 Malloc0 00:04:28.207 13:01:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.468 Malloc1 00:04:28.468 13:01:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.468 /dev/nbd0 00:04:28.468 13:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.728 13:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:28.728 13:01:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:28.728 1+0 records in 00:04:28.728 1+0 records out 00:04:28.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016927 s, 24.2 MB/s 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:28.729 /dev/nbd1 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:28.729 1+0 records in 00:04:28.729 1+0 records out 00:04:28.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283152 s, 14.5 MB/s 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:28.729 13:01:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.729 13:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:28.990 { 00:04:28.990 "nbd_device": "/dev/nbd0", 00:04:28.990 "bdev_name": "Malloc0" 00:04:28.990 }, 00:04:28.990 { 00:04:28.990 "nbd_device": "/dev/nbd1", 00:04:28.990 "bdev_name": "Malloc1" 00:04:28.990 } 00:04:28.990 ]' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:28.990 { 00:04:28.990 "nbd_device": "/dev/nbd0", 00:04:28.990 "bdev_name": "Malloc0" 00:04:28.990 }, 00:04:28.990 { 00:04:28.990 "nbd_device": "/dev/nbd1", 00:04:28.990 "bdev_name": "Malloc1" 00:04:28.990 } 00:04:28.990 ]' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:28.990 /dev/nbd1' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:28.990 /dev/nbd1' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:28.990 256+0 records in 00:04:28.990 256+0 records out 00:04:28.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124792 s, 84.0 MB/s 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:28.990 13:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.251 256+0 records in 00:04:29.251 256+0 records out 00:04:29.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121032 s, 86.6 MB/s 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:29.251 256+0 records in 00:04:29.251 256+0 records out 00:04:29.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130751 s, 80.2 MB/s 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.251 13:01:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.251 13:01:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.512 13:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.772 13:01:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.772 13:01:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.032 13:01:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.032 [2024-11-06 13:01:11.804348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.032 [2024-11-06 13:01:11.832782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.032 [2024-11-06 13:01:11.832805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.032 [2024-11-06 13:01:11.862728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.032 [2024-11-06 13:01:11.862763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:33.333 13:01:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.333 13:01:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:33.333 spdk_app_start Round 2 00:04:33.333 13:01:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1491343 /var/tmp/spdk-nbd.sock 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1491343 ']' 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
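Each round closes by confirming that no NBD devices remain attached: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts them. Condensed from the trace, with the long rpc.py path shortened behind a variable:

    rpc="$SPDK_DIR/scripts/rpc.py"    # $SPDK_DIR stands in for the workspace path
    nbd_disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero on zero matches, hence the || true
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo "all NBD devices detached"
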
00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.333 13:01:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:33.333 13:01:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.333 Malloc0 00:04:33.333 13:01:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.594 Malloc1 00:04:33.594 13:01:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.594 /dev/nbd0 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:33.594 1+0 records in 00:04:33.594 1+0 records out 00:04:33.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275254 s, 14.9 MB/s 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:33.594 13:01:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.594 13:01:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.856 /dev/nbd1 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.856 1+0 records in 00:04:33.856 1+0 records out 00:04:33.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274934 s, 14.9 MB/s 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:33.856 13:01:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.856 13:01:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:34.118 { 00:04:34.118 "nbd_device": "/dev/nbd0", 00:04:34.118 "bdev_name": "Malloc0" 00:04:34.118 }, 00:04:34.118 { 00:04:34.118 "nbd_device": "/dev/nbd1", 00:04:34.118 "bdev_name": "Malloc1" 00:04:34.118 } 00:04:34.118 ]' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.118 { 00:04:34.118 "nbd_device": "/dev/nbd0", 00:04:34.118 "bdev_name": "Malloc0" 00:04:34.118 }, 00:04:34.118 { 00:04:34.118 "nbd_device": "/dev/nbd1", 00:04:34.118 "bdev_name": "Malloc1" 00:04:34.118 } 00:04:34.118 ]' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.118 /dev/nbd1' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.118 /dev/nbd1' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.118 256+0 records in 00:04:34.118 256+0 records out 00:04:34.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127369 s, 82.3 MB/s 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.118 256+0 records in 00:04:34.118 256+0 records out 00:04:34.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120035 s, 87.4 MB/s 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.118 13:01:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.118 256+0 records in 00:04:34.118 256+0 records out 00:04:34.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128061 s, 81.9 MB/s 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.118 13:01:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.380 13:01:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.641 13:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.908 13:01:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.908 13:01:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:34.908 13:01:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.206 [2024-11-06 13:01:16.901415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.206 [2024-11-06 13:01:16.930757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.206 [2024-11-06 13:01:16.930770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.206 [2024-11-06 13:01:16.960171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.206 [2024-11-06 13:01:16.960211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.526 13:01:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1491343 /var/tmp/spdk-nbd.sock 00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1491343 ']' 00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
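nbd_stop_disk is asynchronous, so the waitfornbd_exit calls traced above poll /proc/partitions until the kernel drops the device entry; the companion waitfornbd used at attach time polls the same file in the opposite direction. A sketch of the exit variant, where the 20-iteration bound comes from the trace but the sleep interval is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the partition entry disappears once the device detaches
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }
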
00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.526 13:01:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:38.527 13:01:20 event.app_repeat -- event/event.sh@39 -- # killprocess 1491343 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1491343 ']' 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1491343 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1491343 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1491343' 00:04:38.527 killing process with pid 1491343 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1491343 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1491343 00:04:38.527 spdk_app_start is called in Round 0. 00:04:38.527 Shutdown signal received, stop current app iteration 00:04:38.527 Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 reinitialization... 00:04:38.527 spdk_app_start is called in Round 1. 00:04:38.527 Shutdown signal received, stop current app iteration 00:04:38.527 Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 reinitialization... 00:04:38.527 spdk_app_start is called in Round 2. 00:04:38.527 Shutdown signal received, stop current app iteration 00:04:38.527 Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 reinitialization... 00:04:38.527 spdk_app_start is called in Round 3. 
00:04:38.527 Shutdown signal received, stop current app iteration 00:04:38.527 13:01:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:38.527 13:01:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:38.527 00:04:38.527 real 0m15.730s 00:04:38.527 user 0m34.513s 00:04:38.527 sys 0m2.259s 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.527 13:01:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.527 ************************************ 00:04:38.527 END TEST app_repeat 00:04:38.527 ************************************ 00:04:38.527 13:01:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:38.527 13:01:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:38.527 13:01:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.527 13:01:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.527 13:01:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.527 ************************************ 00:04:38.527 START TEST cpu_locks 00:04:38.527 ************************************ 00:04:38.527 13:01:20 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:38.527 * Looking for test storage... 00:04:38.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:38.527 13:01:20 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.527 13:01:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.527 13:01:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.527 13:01:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:38.527 13:01:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.789 13:01:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.789 --rc genhtml_branch_coverage=1 00:04:38.789 --rc genhtml_function_coverage=1 00:04:38.789 --rc genhtml_legend=1 00:04:38.789 --rc geninfo_all_blocks=1 00:04:38.789 --rc geninfo_unexecuted_blocks=1 00:04:38.789 00:04:38.789 ' 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.789 --rc genhtml_branch_coverage=1 00:04:38.789 --rc genhtml_function_coverage=1 00:04:38.789 --rc genhtml_legend=1 00:04:38.789 --rc geninfo_all_blocks=1 00:04:38.789 --rc geninfo_unexecuted_blocks=1 00:04:38.789 00:04:38.789 ' 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.789 --rc genhtml_branch_coverage=1 00:04:38.789 --rc genhtml_function_coverage=1 00:04:38.789 --rc genhtml_legend=1 00:04:38.789 --rc geninfo_all_blocks=1 00:04:38.789 --rc geninfo_unexecuted_blocks=1 00:04:38.789 00:04:38.789 ' 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.789 --rc genhtml_branch_coverage=1 00:04:38.789 --rc genhtml_function_coverage=1 00:04:38.789 --rc genhtml_legend=1 00:04:38.789 --rc geninfo_all_blocks=1 00:04:38.789 --rc geninfo_unexecuted_blocks=1 00:04:38.789 00:04:38.789 ' 00:04:38.789 13:01:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:38.789 13:01:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:38.789 13:01:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:38.789 13:01:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.789 13:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.789 ************************************ 
00:04:38.789 START TEST default_locks 00:04:38.789 ************************************ 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1494857 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1494857 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1494857 ']' 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:38.789 13:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.789 [2024-11-06 13:01:20.537483] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:38.789 [2024-11-06 13:01:20.537535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494857 ] 00:04:38.789 [2024-11-06 13:01:20.622535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.789 [2024-11-06 13:01:20.660816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.731 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.731 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:39.731 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1494857 00:04:39.731 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1494857 00:04:39.731 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.993 lslocks: write error 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1494857 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1494857 ']' 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1494857 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1494857 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 1494857' 00:04:39.993 killing process with pid 1494857 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1494857 00:04:39.993 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1494857 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1494857 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1494857 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1494857 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1494857 ']' 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
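killprocess in the trace above follows a fixed liveness pattern: probe with kill -0, read the command name with ps to confirm the pid is still an SPDK reactor (and not sudo) before signalling, then kill and reap with wait. A condensed sketch of that helper; the sudo branch of the real autotest_common.sh is elided, and the pid is whatever the caller backgrounded:

killprocess_sketch() {
    local pid=$1
    # kill -0 delivers no signal; it only tests that the pid exists and
    # that we are permitted to signal it.
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do

    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0, reactor_1, ...
        if [ "$name" = sudo ]; then
            :    # the real helper signals sudo's child instead; omitted here
        fi
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap, so lock files and lslocks state settle
}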
00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1494857) - No such process 00:04:40.255 ERROR: process (pid: 1494857) is no longer running 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:40.255 00:04:40.255 real 0m1.511s 00:04:40.255 user 0m1.639s 00:04:40.255 sys 0m0.519s 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.255 13:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.255 ************************************ 00:04:40.255 END TEST default_locks 00:04:40.255 ************************************ 00:04:40.255 13:01:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:40.255 13:01:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.255 13:01:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.255 13:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.255 ************************************ 00:04:40.255 START TEST default_locks_via_rpc 00:04:40.255 ************************************ 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1495170 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1495170 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1495170 ']' 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
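default_locks above checked both directions: while the target is alive, locks_exist greps lslocks output for an spdk_cpu_lock entry, and after the kill the stale pid is fed back through waitforlisten wrapped in NOT, so the expected failure (es=1) keeps the suite green. The via_rpc variant starting here toggles the same behaviour at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A sketch of the probe itself (pid illustrative):

# cpu_locks.sh@22: a target started without --disable-cpumask-locks
# holds per-core locks that lslocks can see.
locks_exist_sketch() {
    local pid=$1
    # "lslocks: write error" in the trace is harmless: grep -q exits on
    # the first match and closes the pipe early. Only the exit code
    # matters here.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# The lock files themselves sit under /var/tmp, one per claimed core:
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null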
00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.255 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.255 [2024-11-06 13:01:22.122524] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:40.255 [2024-11-06 13:01:22.122579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495170 ] 00:04:40.518 [2024-11-06 13:01:22.207804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.518 [2024-11-06 13:01:22.244098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1495170 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1495170 00:04:41.090 13:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1495170 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1495170 ']' 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1495170 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1495170 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.661 
13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1495170' 00:04:41.661 killing process with pid 1495170 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1495170 00:04:41.661 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1495170 00:04:41.922 00:04:41.922 real 0m1.537s 00:04:41.922 user 0m1.675s 00:04:41.922 sys 0m0.520s 00:04:41.922 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.922 13:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.922 ************************************ 00:04:41.922 END TEST default_locks_via_rpc 00:04:41.922 ************************************ 00:04:41.922 13:01:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.922 13:01:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.922 13:01:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.922 13:01:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.922 ************************************ 00:04:41.922 START TEST non_locking_app_on_locked_coremask 00:04:41.922 ************************************ 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1495502 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1495502 /var/tmp/spdk.sock 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1495502 ']' 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.922 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.922 [2024-11-06 13:01:23.736148] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:41.922 [2024-11-06 13:01:23.736203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495502 ] 00:04:41.922 [2024-11-06 13:01:23.822851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.182 [2024-11-06 13:01:23.857521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1495681 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1495681 /var/tmp/spdk2.sock 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1495681 ']' 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.753 13:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.753 [2024-11-06 13:01:24.556779] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:42.753 [2024-11-06 13:01:24.556829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495681 ] 00:04:42.753 [2024-11-06 13:01:24.643768] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:42.753 [2024-11-06 13:01:24.643789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.013 [2024-11-06 13:01:24.702104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.584 13:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.584 13:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:43.584 13:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1495502 00:04:43.584 13:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1495502 00:04:43.584 13:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.157 lslocks: write error 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1495502 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1495502 ']' 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1495502 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.157 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1495502 00:04:44.418 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.418 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.418 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1495502' 00:04:44.418 killing process with pid 1495502 00:04:44.418 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1495502 00:04:44.418 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1495502 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1495681 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1495681 ']' 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1495681 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1495681 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1495681' 00:04:44.678 
killing process with pid 1495681 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1495681 00:04:44.678 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1495681 00:04:44.939 00:04:44.939 real 0m3.015s 00:04:44.939 user 0m3.324s 00:04:44.939 sys 0m0.935s 00:04:44.939 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.939 13:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.939 ************************************ 00:04:44.939 END TEST non_locking_app_on_locked_coremask 00:04:44.939 ************************************ 00:04:44.939 13:01:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:44.939 13:01:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.939 13:01:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.939 13:01:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.939 ************************************ 00:04:44.939 START TEST locking_app_on_unlocked_coremask 00:04:44.939 ************************************ 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1496066 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1496066 /var/tmp/spdk.sock 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1496066 ']' 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.939 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.939 [2024-11-06 13:01:26.826383] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:44.939 [2024-11-06 13:01:26.826432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496066 ] 00:04:45.200 [2024-11-06 13:01:26.888881] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
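locking_app_on_unlocked_coremask, now starting, drives two targets on the same core: the first is launched with --disable-cpumask-locks so it claims nothing, and the second runs with default locking on a separate RPC socket and must be the one lslocks reports. A rough sketch of that setup with the flags from this run (in the real test, waitforlisten polls each socket before any assertion):

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

# First instance: core 0, core lock explicitly disabled.
"$tgt" -m 0x1 --disable-cpumask-locks &
pid1=$!

# Second instance: same mask, default locking, own RPC socket so the two
# targets do not collide on /var/tmp/spdk.sock.
"$tgt" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!

# Expected: only the locked instance shows up.
# lslocks -p "$pid2" | grep -q spdk_cpu_lock    # succeeds
# lslocks -p "$pid1" | grep -q spdk_cpu_lock    # fails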
00:04:45.200 [2024-11-06 13:01:26.888903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.200 [2024-11-06 13:01:26.917981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1496218 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1496218 /var/tmp/spdk2.sock 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1496218 ']' 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:45.200 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.460 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.460 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.460 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.460 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.460 [2024-11-06 13:01:27.156510] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:45.460 [2024-11-06 13:01:27.156559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496218 ] 00:04:45.460 [2024-11-06 13:01:27.243035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.460 [2024-11-06 13:01:27.305799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.401 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.401 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:46.401 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1496218 00:04:46.401 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1496218 00:04:46.401 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.662 lslocks: write error 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1496066 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1496066 ']' 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1496066 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1496066 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1496066' 00:04:46.662 killing process with pid 1496066 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1496066 00:04:46.662 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1496066 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1496218 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1496218 ']' 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1496218 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1496218 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.234 13:01:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1496218' 00:04:47.234 killing process with pid 1496218 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1496218 00:04:47.234 13:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1496218 00:04:47.494 00:04:47.494 real 0m2.401s 00:04:47.494 user 0m2.645s 00:04:47.494 sys 0m0.861s 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.494 ************************************ 00:04:47.494 END TEST locking_app_on_unlocked_coremask 00:04:47.494 ************************************ 00:04:47.494 13:01:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:47.494 13:01:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.494 13:01:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.494 13:01:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.494 ************************************ 00:04:47.494 START TEST locking_app_on_locked_coremask 00:04:47.494 ************************************ 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1496755 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1496755 /var/tmp/spdk.sock 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1496755 ']' 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.494 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.494 [2024-11-06 13:01:29.303550] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:47.494 [2024-11-06 13:01:29.303600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496755 ] 00:04:47.494 [2024-11-06 13:01:29.388334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.755 [2024-11-06 13:01:29.419806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1496774 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1496774 /var/tmp/spdk2.sock 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1496774 /var/tmp/spdk2.sock 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.327 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1496774 /var/tmp/spdk2.sock 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1496774 ']' 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.328 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.328 [2024-11-06 13:01:30.120281] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:48.328 [2024-11-06 13:01:30.120334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496774 ] 00:04:48.328 [2024-11-06 13:01:30.209071] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1496755 has claimed it. 00:04:48.328 [2024-11-06 13:01:30.209106] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:48.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1496774) - No such process 00:04:48.899 ERROR: process (pid: 1496774) is no longer running 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1496755 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1496755 00:04:48.899 13:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.470 lslocks: write error 00:04:49.470 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1496755 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1496755 ']' 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1496755 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1496755 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1496755' 00:04:49.471 killing process with pid 1496755 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1496755 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1496755 00:04:49.471 00:04:49.471 real 0m2.101s 00:04:49.471 user 0m2.355s 00:04:49.471 sys 0m0.586s 00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:04:49.471 13:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.471 ************************************ 00:04:49.471 END TEST locking_app_on_locked_coremask 00:04:49.471 ************************************ 00:04:49.731 13:01:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:49.731 13:01:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.731 13:01:31 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.731 13:01:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.731 ************************************ 00:04:49.731 START TEST locking_overlapped_coremask 00:04:49.731 ************************************ 00:04:49.731 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:49.731 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1497139 00:04:49.731 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1497139 /var/tmp/spdk.sock 00:04:49.731 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:49.731 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1497139 ']' 00:04:49.732 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.732 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.732 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.732 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.732 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.732 [2024-11-06 13:01:31.477283] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:49.732 [2024-11-06 13:01:31.477338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497139 ] 00:04:49.732 [2024-11-06 13:01:31.563965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.732 [2024-11-06 13:01:31.599835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.732 [2024-11-06 13:01:31.600084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.732 [2024-11-06 13:01:31.600084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.674 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.674 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1497348 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1497348 /var/tmp/spdk2.sock 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1497348 /var/tmp/spdk2.sock 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1497348 /var/tmp/spdk2.sock 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1497348 ']' 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.675 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.675 [2024-11-06 13:01:32.333359] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:04:50.675 [2024-11-06 13:01:32.333413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497348 ] 00:04:50.675 [2024-11-06 13:01:32.445858] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1497139 has claimed it. 00:04:50.675 [2024-11-06 13:01:32.445902] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:51.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1497348) - No such process 00:04:51.246 ERROR: process (pid: 1497348) is no longer running 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1497139 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1497139 ']' 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1497139 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.246 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1497139 00:04:51.246 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.246 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.246 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1497139' 00:04:51.246 killing process with pid 1497139 00:04:51.246 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1497139 00:04:51.246 13:01:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1497139 00:04:51.506 00:04:51.506 real 0m1.781s 00:04:51.506 user 0m5.162s 00:04:51.506 sys 0m0.392s 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.506 ************************************ 00:04:51.506 END TEST locking_overlapped_coremask 00:04:51.506 ************************************ 00:04:51.506 13:01:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:51.506 13:01:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.506 13:01:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.506 13:01:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.506 ************************************ 00:04:51.506 START TEST locking_overlapped_coremask_via_rpc 00:04:51.506 ************************************ 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1497509 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1497509 /var/tmp/spdk.sock 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1497509 ']' 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.506 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.506 [2024-11-06 13:01:33.335433] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:51.506 [2024-11-06 13:01:33.335482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497509 ] 00:04:51.767 [2024-11-06 13:01:33.419737] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
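[editor's note] check_remaining_locks, traced earlier, only globs /var/tmp/spdk_cpu_lock_* and compares the file list against the expected set for cores 0-2; whether a lock file is still held can be probed with flock(1). A hedged sketch (flock-based probing is an assumption about the lock mechanism, not taken from the harness):

    for f in /var/tmp/spdk_cpu_lock_*; do
        # flock -n fails while a reactor still holds an exclusive lock on the core's file
        flock -n "$f" true && echo "$f: free" || echo "$f: held"
    done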
00:04:51.767 [2024-11-06 13:01:33.419766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:51.767 [2024-11-06 13:01:33.451303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.767 [2024-11-06 13:01:33.451448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.767 [2024-11-06 13:01:33.451450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1497795 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1497795 /var/tmp/spdk2.sock 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1497795 ']' 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:52.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.339 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.339 [2024-11-06 13:01:34.184214] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:52.339 [2024-11-06 13:01:34.184269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497795 ] 00:04:52.600 [2024-11-06 13:01:34.296743] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
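[editor's note] Because both targets in this test are started with --disable-cpumask-locks, the 0x7/0x1c overlap on core 2 is tolerated at boot, and the reactors on cores 2-4 start normally in the lines that follow; the conflict is only provoked later, when the locks are re-enabled over RPC. The launch pattern, reduced to its essentials (paths shortened; a sketch, not the harness code):

    spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no locks taken
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, boots despite overlap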
00:04:52.600 [2024-11-06 13:01:34.296778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:52.600 [2024-11-06 13:01:34.374237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.600 [2024-11-06 13:01:34.374392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.600 [2024-11-06 13:01:34.374394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.171 [2024-11-06 13:01:34.990824] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1497509 has claimed it. 
00:04:53.171 request: 00:04:53.171 { 00:04:53.171 "method": "framework_enable_cpumask_locks", 00:04:53.171 "req_id": 1 00:04:53.171 } 00:04:53.171 Got JSON-RPC error response 00:04:53.171 response: 00:04:53.171 { 00:04:53.171 "code": -32603, 00:04:53.171 "message": "Failed to claim CPU core: 2" 00:04:53.171 } 00:04:53.171 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1497509 /var/tmp/spdk.sock 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1497509 ']' 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.171 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.432 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1497795 /var/tmp/spdk2.sock 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1497795 ']' 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
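[editor's note] The exchange above can be reproduced by hand: framework_enable_cpumask_locks succeeds on the first target's default socket, but on the second it fails with JSON-RPC code -32603 because pid 1497509 already holds core 2. A sketch using rpc.py as invoked in the log (repository path shortened):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: locks claimed
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # second target: 'Failed to claim CPU core: 2' (code -32603), as in the response above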
00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.433 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:53.693 00:04:53.693 real 0m2.087s 00:04:53.693 user 0m0.848s 00:04:53.693 sys 0m0.161s 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.693 13:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.694 ************************************ 00:04:53.694 END TEST locking_overlapped_coremask_via_rpc 00:04:53.694 ************************************ 00:04:53.694 13:01:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:53.694 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1497509 ]] 00:04:53.694 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1497509 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1497509 ']' 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1497509 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1497509 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1497509' 00:04:53.694 killing process with pid 1497509 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1497509 00:04:53.694 13:01:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1497509 00:04:53.955 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1497795 ]] 00:04:53.955 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1497795 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1497795 ']' 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1497795 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1497795 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1497795' 00:04:53.955 killing process with pid 1497795 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1497795 00:04:53.955 13:01:35 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1497795 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1497509 ]] 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1497509 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1497509 ']' 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1497509 00:04:54.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1497509) - No such process 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1497509 is not found' 00:04:54.216 Process with pid 1497509 is not found 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1497795 ]] 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1497795 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1497795 ']' 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1497795 00:04:54.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1497795) - No such process 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1497795 is not found' 00:04:54.216 Process with pid 1497795 is not found 00:04:54.216 13:01:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:54.216 00:04:54.216 real 0m15.693s 00:04:54.216 user 0m27.687s 00:04:54.216 sys 0m4.926s 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.216 13:01:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.216 ************************************ 00:04:54.216 END TEST cpu_locks 00:04:54.216 ************************************ 00:04:54.216 00:04:54.216 real 0m41.437s 00:04:54.216 user 1m21.642s 00:04:54.216 sys 0m8.256s 00:04:54.216 13:01:35 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.216 13:01:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.216 ************************************ 00:04:54.216 END TEST event 00:04:54.216 ************************************ 00:04:54.216 13:01:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:54.216 13:01:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.216 13:01:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.216 13:01:36 -- common/autotest_common.sh@10 -- # set +x 00:04:54.216 ************************************ 00:04:54.216 START TEST thread 00:04:54.216 ************************************ 00:04:54.216 13:01:36 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:54.478 * Looking for test storage... 00:04:54.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.478 13:01:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.478 13:01:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.478 13:01:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.478 13:01:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.478 13:01:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.478 13:01:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.478 13:01:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.478 13:01:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.478 13:01:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.478 13:01:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.478 13:01:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.478 13:01:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:54.478 13:01:36 thread -- scripts/common.sh@345 -- # : 1 00:04:54.478 13:01:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.478 13:01:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.478 13:01:36 thread -- scripts/common.sh@365 -- # decimal 1 00:04:54.478 13:01:36 thread -- scripts/common.sh@353 -- # local d=1 00:04:54.478 13:01:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.478 13:01:36 thread -- scripts/common.sh@355 -- # echo 1 00:04:54.478 13:01:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.478 13:01:36 thread -- scripts/common.sh@366 -- # decimal 2 00:04:54.478 13:01:36 thread -- scripts/common.sh@353 -- # local d=2 00:04:54.478 13:01:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.478 13:01:36 thread -- scripts/common.sh@355 -- # echo 2 00:04:54.478 13:01:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.478 13:01:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.478 13:01:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.478 13:01:36 thread -- scripts/common.sh@368 -- # return 0 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.478 --rc genhtml_branch_coverage=1 00:04:54.478 --rc genhtml_function_coverage=1 00:04:54.478 --rc genhtml_legend=1 00:04:54.478 --rc geninfo_all_blocks=1 00:04:54.478 --rc geninfo_unexecuted_blocks=1 00:04:54.478 00:04:54.478 ' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.478 --rc genhtml_branch_coverage=1 00:04:54.478 --rc genhtml_function_coverage=1 00:04:54.478 --rc genhtml_legend=1 00:04:54.478 --rc geninfo_all_blocks=1 00:04:54.478 --rc geninfo_unexecuted_blocks=1 00:04:54.478 
00:04:54.478 ' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.478 --rc genhtml_branch_coverage=1 00:04:54.478 --rc genhtml_function_coverage=1 00:04:54.478 --rc genhtml_legend=1 00:04:54.478 --rc geninfo_all_blocks=1 00:04:54.478 --rc geninfo_unexecuted_blocks=1 00:04:54.478 00:04:54.478 ' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.478 --rc genhtml_branch_coverage=1 00:04:54.478 --rc genhtml_function_coverage=1 00:04:54.478 --rc genhtml_legend=1 00:04:54.478 --rc geninfo_all_blocks=1 00:04:54.478 --rc geninfo_unexecuted_blocks=1 00:04:54.478 00:04:54.478 ' 00:04:54.478 13:01:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.478 13:01:36 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.478 ************************************ 00:04:54.478 START TEST thread_poller_perf 00:04:54.478 ************************************ 00:04:54.478 13:01:36 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:54.478 [2024-11-06 13:01:36.293324] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:54.478 [2024-11-06 13:01:36.293429] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498289 ] 00:04:54.738 [2024-11-06 13:01:36.384552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.738 [2024-11-06 13:01:36.419641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.738 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:55.681 [2024-11-06T12:01:37.583Z] ====================================== 00:04:55.681 [2024-11-06T12:01:37.583Z] busy:2406872880 (cyc) 00:04:55.681 [2024-11-06T12:01:37.583Z] total_run_count: 418000 00:04:55.681 [2024-11-06T12:01:37.583Z] tsc_hz: 2400000000 (cyc) 00:04:55.681 [2024-11-06T12:01:37.583Z] ====================================== 00:04:55.681 [2024-11-06T12:01:37.583Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:04:55.681 00:04:55.681 real 0m1.181s 00:04:55.681 user 0m1.099s 00:04:55.681 sys 0m0.078s 00:04:55.681 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.681 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.681 ************************************ 00:04:55.681 END TEST thread_poller_perf 00:04:55.681 ************************************ 00:04:55.681 13:01:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:55.681 13:01:37 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:55.681 13:01:37 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.681 13:01:37 thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.681 ************************************ 00:04:55.681 START TEST thread_poller_perf 00:04:55.681 ************************************ 00:04:55.681 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:55.681 [2024-11-06 13:01:37.547931] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:04:55.681 [2024-11-06 13:01:37.548038] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498588 ] 00:04:55.942 [2024-11-06 13:01:37.637828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.942 [2024-11-06 13:01:37.674734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.942 Running 1000 pollers for 1 seconds with 0 microseconds period. 
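[editor's note] The poller_cost figures in these summaries are straight division: the cost in cycles is busy cycles over total_run_count, and the nanosecond value scales that by tsc_hz. Checking the 1-microsecond run above with bash arithmetic:

    busy=2406872880 runs=418000 tsc_hz=2400000000
    echo $(( busy / runs ))                          # 5758 cyc per poll, as reported
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 2399 nsec, as reported

The same arithmetic applied to the 0-microsecond run that follows gives 431 cyc and 179 nsec.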
00:04:56.886 [2024-11-06T12:01:38.788Z] ====================================== 00:04:56.886 [2024-11-06T12:01:38.788Z] busy:2401355750 (cyc) 00:04:56.886 [2024-11-06T12:01:38.788Z] total_run_count: 5569000 00:04:56.886 [2024-11-06T12:01:38.788Z] tsc_hz: 2400000000 (cyc) 00:04:56.886 [2024-11-06T12:01:38.788Z] ====================================== 00:04:56.886 [2024-11-06T12:01:38.788Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:56.886 00:04:56.886 real 0m1.174s 00:04:56.886 user 0m1.090s 00:04:56.886 sys 0m0.081s 00:04:56.886 13:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.886 13:01:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.886 ************************************ 00:04:56.886 END TEST thread_poller_perf 00:04:56.886 ************************************ 00:04:56.886 13:01:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:56.886 00:04:56.886 real 0m2.698s 00:04:56.886 user 0m2.359s 00:04:56.886 sys 0m0.353s 00:04:56.886 13:01:38 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.886 13:01:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.886 ************************************ 00:04:56.886 END TEST thread 00:04:56.886 ************************************ 00:04:56.886 13:01:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:56.886 13:01:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:56.886 13:01:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.886 13:01:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.886 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:04:57.147 ************************************ 00:04:57.147 START TEST app_cmdline 00:04:57.147 ************************************ 00:04:57.147 13:01:38 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:57.147 * Looking for test storage... 
00:04:57.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:57.147 13:01:38 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.147 13:01:38 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.147 13:01:38 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.147 13:01:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.147 --rc genhtml_branch_coverage=1 00:04:57.147 --rc genhtml_function_coverage=1 00:04:57.147 --rc genhtml_legend=1 00:04:57.147 --rc geninfo_all_blocks=1 00:04:57.147 --rc geninfo_unexecuted_blocks=1 00:04:57.147 00:04:57.147 ' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.147 --rc genhtml_branch_coverage=1 00:04:57.147 --rc genhtml_function_coverage=1 00:04:57.147 --rc genhtml_legend=1 00:04:57.147 --rc geninfo_all_blocks=1 00:04:57.147 --rc geninfo_unexecuted_blocks=1 
00:04:57.147 00:04:57.147 ' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.147 --rc genhtml_branch_coverage=1 00:04:57.147 --rc genhtml_function_coverage=1 00:04:57.147 --rc genhtml_legend=1 00:04:57.147 --rc geninfo_all_blocks=1 00:04:57.147 --rc geninfo_unexecuted_blocks=1 00:04:57.147 00:04:57.147 ' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.147 --rc genhtml_branch_coverage=1 00:04:57.147 --rc genhtml_function_coverage=1 00:04:57.147 --rc genhtml_legend=1 00:04:57.147 --rc geninfo_all_blocks=1 00:04:57.147 --rc geninfo_unexecuted_blocks=1 00:04:57.147 00:04:57.147 ' 00:04:57.147 13:01:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:57.147 13:01:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1498887 00:04:57.147 13:01:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1498887 00:04:57.147 13:01:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1498887 ']' 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.147 13:01:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:57.409 [2024-11-06 13:01:39.093498] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
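[editor's note] This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC surface is reduced to exactly those two methods; anything else is rejected with -32601 before dispatch, which is what the env_dpdk_get_mem_stats probe further down verifies. In short (socket defaults to /var/tmp/spdk.sock; repository path shortened):

    scripts/rpc.py spdk_get_version          # allowed: returns the version object shown below
    scripts/rpc.py env_dpdk_get_mem_stats    # filtered: JSON-RPC -32601 'Method not found'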
00:04:57.409 [2024-11-06 13:01:39.093572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498887 ] 00:04:57.409 [2024-11-06 13:01:39.179363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.409 [2024-11-06 13:01:39.214580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.351 13:01:39 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.351 13:01:39 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:04:58.351 13:01:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:58.351 { 00:04:58.351 "version": "SPDK v25.01-pre git sha1 adaafacab", 00:04:58.351 "fields": { 00:04:58.351 "major": 25, 00:04:58.351 "minor": 1, 00:04:58.351 "patch": 0, 00:04:58.351 "suffix": "-pre", 00:04:58.351 "commit": "adaafacab" 00:04:58.351 } 00:04:58.351 } 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:58.351 13:01:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:58.351 13:01:40 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:58.611 request: 00:04:58.611 { 00:04:58.611 "method": "env_dpdk_get_mem_stats", 00:04:58.611 "req_id": 1 00:04:58.611 } 00:04:58.611 Got JSON-RPC error response 00:04:58.611 response: 00:04:58.611 { 00:04:58.611 "code": -32601, 00:04:58.611 "message": "Method not found" 00:04:58.611 } 00:04:58.611 13:01:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:58.611 13:01:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.612 13:01:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1498887 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1498887 ']' 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1498887 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1498887 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1498887' 00:04:58.612 killing process with pid 1498887 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@971 -- # kill 1498887 00:04:58.612 13:01:40 app_cmdline -- common/autotest_common.sh@976 -- # wait 1498887 00:04:58.873 00:04:58.873 real 0m1.713s 00:04:58.873 user 0m2.034s 00:04:58.873 sys 0m0.471s 00:04:58.873 13:01:40 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.873 13:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:58.873 ************************************ 00:04:58.873 END TEST app_cmdline 00:04:58.873 ************************************ 00:04:58.873 13:01:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:58.873 13:01:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.873 13:01:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.873 13:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:58.873 ************************************ 00:04:58.873 START TEST version 00:04:58.873 ************************************ 00:04:58.873 13:01:40 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:58.873 * Looking for test storage... 
00:04:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:58.873 13:01:40 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.873 13:01:40 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.873 13:01:40 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.134 13:01:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.134 13:01:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.134 13:01:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.134 13:01:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.134 13:01:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.134 13:01:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.134 13:01:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.134 13:01:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.134 13:01:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.134 13:01:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.134 13:01:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.134 13:01:40 version -- scripts/common.sh@344 -- # case "$op" in 00:04:59.134 13:01:40 version -- scripts/common.sh@345 -- # : 1 00:04:59.134 13:01:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.134 13:01:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.134 13:01:40 version -- scripts/common.sh@365 -- # decimal 1 00:04:59.134 13:01:40 version -- scripts/common.sh@353 -- # local d=1 00:04:59.134 13:01:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.134 13:01:40 version -- scripts/common.sh@355 -- # echo 1 00:04:59.134 13:01:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.134 13:01:40 version -- scripts/common.sh@366 -- # decimal 2 00:04:59.134 13:01:40 version -- scripts/common.sh@353 -- # local d=2 00:04:59.134 13:01:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.134 13:01:40 version -- scripts/common.sh@355 -- # echo 2 00:04:59.134 13:01:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.134 13:01:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.134 13:01:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.134 13:01:40 version -- scripts/common.sh@368 -- # return 0 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.134 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.134 --rc genhtml_branch_coverage=1 00:04:59.134 --rc genhtml_function_coverage=1 00:04:59.134 --rc genhtml_legend=1 00:04:59.134 --rc geninfo_all_blocks=1 00:04:59.134 --rc geninfo_unexecuted_blocks=1 00:04:59.134 00:04:59.134 ' 00:04:59.134 13:01:40 version -- app/version.sh@17 -- # get_header_version major 00:04:59.134 13:01:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # cut -f2 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # tr -d '"' 00:04:59.134 13:01:40 version -- app/version.sh@17 -- # major=25 00:04:59.134 13:01:40 version -- app/version.sh@18 -- # get_header_version minor 00:04:59.134 13:01:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # cut -f2 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # tr -d '"' 00:04:59.134 13:01:40 version -- app/version.sh@18 -- # minor=1 00:04:59.134 13:01:40 version -- app/version.sh@19 -- # get_header_version patch 00:04:59.134 13:01:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # cut -f2 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # tr -d '"' 00:04:59.134 13:01:40 version -- app/version.sh@19 -- # patch=0 00:04:59.134 13:01:40 version -- app/version.sh@20 -- # get_header_version suffix 00:04:59.134 13:01:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # cut -f2 00:04:59.134 13:01:40 version -- app/version.sh@14 -- # tr -d '"' 00:04:59.134 13:01:40 version -- app/version.sh@20 -- # suffix=-pre 00:04:59.134 13:01:40 version -- app/version.sh@22 -- # version=25.1 00:04:59.134 13:01:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:59.134 13:01:40 version -- app/version.sh@28 -- # version=25.1rc0 00:04:59.134 13:01:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:59.134 13:01:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:59.134 13:01:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:59.134 13:01:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:59.134 00:04:59.134 real 0m0.286s 00:04:59.134 user 0m0.167s 00:04:59.134 sys 0m0.168s 00:04:59.134 13:01:40 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.134 
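[editor's note] get_header_version recovers each component from include/spdk/version.h with grep/cut/tr exactly as traced above; a standalone sketch of the same extraction (the rc0 substitution mirrors the suffix handling when patch is 0, and is an assumption about version.sh's intent rather than its literal code):

    hdr=include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix/-pre/rc0}"   # 25.1rc0, matching py_version above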
13:01:40 version -- common/autotest_common.sh@10 -- # set +x 00:04:59.134 ************************************ 00:04:59.134 END TEST version 00:04:59.134 ************************************ 00:04:59.134 13:01:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:59.134 13:01:40 -- spdk/autotest.sh@194 -- # uname -s 00:04:59.134 13:01:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:59.134 13:01:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:59.134 13:01:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:59.134 13:01:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:59.134 13:01:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.134 13:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:59.134 13:01:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:59.134 13:01:40 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:59.134 13:01:40 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:59.134 13:01:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:59.134 13:01:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.134 13:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:59.134 ************************************ 00:04:59.134 START TEST nvmf_tcp 00:04:59.134 ************************************ 00:04:59.134 13:01:41 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:59.396 * Looking for test storage... 
00:04:59.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.396 13:01:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.396 --rc genhtml_branch_coverage=1 00:04:59.396 --rc genhtml_function_coverage=1 00:04:59.396 --rc genhtml_legend=1 00:04:59.396 --rc geninfo_all_blocks=1 00:04:59.396 --rc geninfo_unexecuted_blocks=1 00:04:59.396 00:04:59.396 ' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.396 --rc genhtml_branch_coverage=1 00:04:59.396 --rc genhtml_function_coverage=1 00:04:59.396 --rc genhtml_legend=1 00:04:59.396 --rc geninfo_all_blocks=1 00:04:59.396 --rc geninfo_unexecuted_blocks=1 00:04:59.396 00:04:59.396 ' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.396 --rc genhtml_branch_coverage=1 00:04:59.396 --rc genhtml_function_coverage=1 00:04:59.396 --rc genhtml_legend=1 00:04:59.396 --rc geninfo_all_blocks=1 00:04:59.396 --rc geninfo_unexecuted_blocks=1 00:04:59.396 00:04:59.396 ' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.396 --rc genhtml_branch_coverage=1 00:04:59.396 --rc genhtml_function_coverage=1 00:04:59.396 --rc genhtml_legend=1 00:04:59.396 --rc geninfo_all_blocks=1 00:04:59.396 --rc geninfo_unexecuted_blocks=1 00:04:59.396 00:04:59.396 ' 00:04:59.396 13:01:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:59.396 13:01:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:59.396 13:01:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.396 13:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.396 ************************************ 00:04:59.396 START TEST nvmf_target_core 00:04:59.396 ************************************ 00:04:59.396 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:59.658 * Looking for test storage... 00:04:59.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:59.658 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.659 --rc genhtml_branch_coverage=1 00:04:59.659 --rc genhtml_function_coverage=1 00:04:59.659 --rc genhtml_legend=1 00:04:59.659 --rc geninfo_all_blocks=1 00:04:59.659 --rc geninfo_unexecuted_blocks=1 00:04:59.659 00:04:59.659 ' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.659 --rc genhtml_branch_coverage=1 00:04:59.659 --rc genhtml_function_coverage=1 00:04:59.659 --rc genhtml_legend=1 00:04:59.659 --rc geninfo_all_blocks=1 00:04:59.659 --rc geninfo_unexecuted_blocks=1 00:04:59.659 00:04:59.659 ' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.659 --rc genhtml_branch_coverage=1 00:04:59.659 --rc genhtml_function_coverage=1 00:04:59.659 --rc genhtml_legend=1 00:04:59.659 --rc geninfo_all_blocks=1 00:04:59.659 --rc geninfo_unexecuted_blocks=1 00:04:59.659 00:04:59.659 ' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.659 --rc genhtml_branch_coverage=1 00:04:59.659 --rc genhtml_function_coverage=1 00:04:59.659 --rc genhtml_legend=1 00:04:59.659 --rc geninfo_all_blocks=1 00:04:59.659 --rc geninfo_unexecuted_blocks=1 00:04:59.659 00:04:59.659 ' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.659 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:59.660 
************************************ 00:04:59.660 START TEST nvmf_abort 00:04:59.660 ************************************ 00:04:59.660 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:59.923 * Looking for test storage... 00:04:59.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.923 --rc genhtml_branch_coverage=1 00:04:59.923 --rc genhtml_function_coverage=1 00:04:59.923 --rc genhtml_legend=1 00:04:59.923 --rc geninfo_all_blocks=1 00:04:59.923 --rc geninfo_unexecuted_blocks=1 00:04:59.923 00:04:59.923 ' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.923 --rc genhtml_branch_coverage=1 00:04:59.923 --rc genhtml_function_coverage=1 00:04:59.923 --rc genhtml_legend=1 00:04:59.923 --rc geninfo_all_blocks=1 00:04:59.923 --rc geninfo_unexecuted_blocks=1 00:04:59.923 00:04:59.923 ' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.923 --rc genhtml_branch_coverage=1 00:04:59.923 --rc genhtml_function_coverage=1 00:04:59.923 --rc genhtml_legend=1 00:04:59.923 --rc geninfo_all_blocks=1 00:04:59.923 --rc geninfo_unexecuted_blocks=1 00:04:59.923 00:04:59.923 ' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.923 --rc genhtml_branch_coverage=1 00:04:59.923 --rc genhtml_function_coverage=1 00:04:59.923 --rc genhtml_legend=1 00:04:59.923 --rc geninfo_all_blocks=1 00:04:59.923 --rc geninfo_unexecuted_blocks=1 00:04:59.923 00:04:59.923 ' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.923 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
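The "integer expression expected" message that keeps surfacing comes from nvmf/common.sh line 33, where an optional test flag expands to an empty string before `-eq`: the trace shows the literal test '[' '' -eq 1 ']'. A minimal sketch of the failure and a guarded form, using a placeholder variable name since the trace only shows the already-expanded value:

```bash
# Reproduction: test(1) needs an integer operand, and an empty string is not one.
FLAG=""                                   # placeholder; the real name is not visible in the trace
[ "$FLAG" -eq 1 ] && echo enabled         # -> [: : integer expression expected

# Defaulting the expansion keeps the comparison well-typed and silent:
[ "${FLAG:-0}" -eq 1 ] && echo enabled    # quietly false when unset or empty
```

The error is cosmetic here: the test evaluates false either way and the harness continues, which is why the run proceeds past it each time it appears.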
00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:59.924 13:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:08.068 13:01:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:08.068 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:08.068 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:08.068 13:01:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:08.068 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:08.069 Found net devices under 0000:31:00.0: cvl_0_0 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:08.069 Found net devices under 0000:31:00.1: cvl_0_1 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:08.069 13:01:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:08.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:08.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:05:08.069 00:05:08.069 --- 10.0.0.2 ping statistics --- 00:05:08.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:08.069 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:08.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:08.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:05:08.069 00:05:08.069 --- 10.0.0.1 ping statistics --- 00:05:08.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:08.069 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1503347 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1503347 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1503347 ']' 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.069 13:01:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.069 [2024-11-06 13:01:49.440788] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
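By this point nvmftestinit has split the two e810 ports between the host and a private network namespace, so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2) talk over a real link, and the pings above verified both directions. A consolidated sketch of that wiring, with commands copied from the trace (run as root), ending with the target launch that produced the startup banner just above:

```bash
ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagging the rule so teardown can strip it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
```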
00:05:08.069 [2024-11-06 13:01:49.440853] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:08.069 [2024-11-06 13:01:49.543432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.069 [2024-11-06 13:01:49.599405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:08.069 [2024-11-06 13:01:49.599464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:08.069 [2024-11-06 13:01:49.599474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:08.069 [2024-11-06 13:01:49.599481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:08.069 [2024-11-06 13:01:49.599488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:08.069 [2024-11-06 13:01:49.601450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.069 [2024-11-06 13:01:49.601608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.069 [2024-11-06 13:01:49.601609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 [2024-11-06 13:01:50.321334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 Malloc0 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 Delay0 
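With the target app up, the harness builds the abort test's backing stack over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev wrapping it so that in-flight I/O lingers long enough to be aborted; the records just after this point add the subsystem, namespace, and listener. The same bring-up as direct rpc.py calls, a sketch assuming rpc_cmd forwards to scripts/rpc.py against the target's socket, as in stock SPDK harnesses:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000     # large delays keep I/O pending
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```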
00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 [2024-11-06 13:01:50.406387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.643 13:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:08.904 [2024-11-06 13:01:50.556853] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:10.820 Initializing NVMe Controllers 00:05:10.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:10.820 controller IO queue size 128 less than required 00:05:10.820 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:10.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:10.820 Initialization complete. Launching workers. 
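The counters reported next come from SPDK's abort example, which the harness launched above with a 128-deep queue against the 128-entry I/O queue the target just warned about, so that requests back up in the driver and become abort candidates. The invocation as traced, with flag readings that are assumptions (the example's usage text is not part of this log):

```bash
# -r target transport ID, -c core mask, -t seconds to run,
# -l log level, -q submission queue depth  (assumed meanings)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```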
00:05:10.820 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28353 00:05:10.820 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28414, failed to submit 62 00:05:10.820 success 28357, unsuccessful 57, failed 0 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:10.820 rmmod nvme_tcp 00:05:10.820 rmmod nvme_fabrics 00:05:10.820 rmmod nvme_keyring 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1503347 ']' 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1503347 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1503347 ']' 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1503347 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:10.820 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1503347 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1503347' 00:05:11.082 killing process with pid 1503347 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1503347 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1503347 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:11.082 13:01:52 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:11.082 13:01:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.632 13:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:13.632 00:05:13.632 real 0m13.435s 00:05:13.632 user 0m13.921s 00:05:13.632 sys 0m6.625s 00:05:13.632 13:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.632 13:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:13.632 ************************************ 00:05:13.632 END TEST nvmf_abort 00:05:13.632 ************************************ 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:13.632 ************************************ 00:05:13.632 START TEST nvmf_ns_hotplug_stress 00:05:13.632 ************************************ 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:13.632 * Looking for test storage... 
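Before the next test's trace gets going, note how nvmftestfini (traced just above) unwound the abort test: unload the kernel NVMe/TCP modules, kill the target, strip only the iptables rules tagged SPDK_NVMF, and flush the test addresses. In plain commands, with the namespace removal an assumption about what _remove_spdk_ns amounts to:

```bash
modprobe -v -r nvme-tcp                                 # rmmod cascades to nvme_fabrics, nvme_keyring
kill "$nvmfpid"                                         # nvmf_tgt, pid 1503347 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep unrelated firewall rules intact
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
```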
00:05:13.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.632 --rc genhtml_branch_coverage=1 00:05:13.632 --rc genhtml_function_coverage=1 00:05:13.632 --rc genhtml_legend=1 00:05:13.632 --rc geninfo_all_blocks=1 00:05:13.632 --rc geninfo_unexecuted_blocks=1 00:05:13.632 00:05:13.632 ' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.632 --rc genhtml_branch_coverage=1 00:05:13.632 --rc genhtml_function_coverage=1 00:05:13.632 --rc genhtml_legend=1 00:05:13.632 --rc geninfo_all_blocks=1 00:05:13.632 --rc geninfo_unexecuted_blocks=1 00:05:13.632 00:05:13.632 ' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.632 --rc genhtml_branch_coverage=1 00:05:13.632 --rc genhtml_function_coverage=1 00:05:13.632 --rc genhtml_legend=1 00:05:13.632 --rc geninfo_all_blocks=1 00:05:13.632 --rc geninfo_unexecuted_blocks=1 00:05:13.632 00:05:13.632 ' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.632 --rc genhtml_branch_coverage=1 00:05:13.632 --rc genhtml_function_coverage=1 00:05:13.632 --rc genhtml_legend=1 00:05:13.632 --rc geninfo_all_blocks=1 00:05:13.632 --rc geninfo_unexecuted_blocks=1 00:05:13.632 00:05:13.632 ' 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.632 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.633 13:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:21.779 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:21.779 
13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:21.779 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:21.780 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:21.780 Found net devices under 0000:31:00.0: cvl_0_0 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:21.780 Found net devices under 0000:31:00.1: cvl_0_1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:21.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:21.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:05:21.780 00:05:21.780 --- 10.0.0.2 ping statistics --- 00:05:21.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:21.780 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:21.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:21.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:05:21.780 00:05:21.780 --- 10.0.0.1 ping statistics --- 00:05:21.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:21.780 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1508319 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1508319 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
1508319 ']' 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.780 13:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:21.780 [2024-11-06 13:02:03.009050] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:05:21.780 [2024-11-06 13:02:03.009116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:21.780 [2024-11-06 13:02:03.109885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.780 [2024-11-06 13:02:03.162062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:21.780 [2024-11-06 13:02:03.162115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:21.780 [2024-11-06 13:02:03.162124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:21.780 [2024-11-06 13:02:03.162131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:21.780 [2024-11-06 13:02:03.162137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
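The scripts/common.sh walk traced above (the IFS=.-: reads and the element-by-element (( ver1[v] < ver2[v] )) comparison) is a version gate: lt 1.15 2 succeeds, so this lcov predates 2.x and LCOV_OPTS keeps the old --rc lcov_branch_coverage=1 option spellings. A minimal standalone reduction of that check (not the literal cmp_versions helper; it assumes purely numeric fields, which the real decimal() guard enforces):

    lt() {  # succeed (return 0) when version $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}  # pad the shorter version with zeros
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo 'old lcov: keep the lcov_branch_coverage=1 option names'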
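Everything from nvmf/common.sh@250 onward is the TCP test-bed plumbing: after the two e810 ports are discovered under /sys/bus/pci/devices/*/net as cvl_0_0 and cvl_0_1, the first is moved into a private network namespace as the target side, the second stays in the root namespace as the initiator, port 4420 is opened, and reachability is proven in both directions before the target app starts. Condensed from the commands in the trace (interface names and paths exactly as logged):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic on the initiator NIC
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # the harness backgrounds the target and polls /var/tmp/spdk.sock (waitforlisten)

and the reactor notices that follow confirm the 0xE core mask landed the app on cores 1-3.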
00:05:21.780 [2024-11-06 13:02:03.164003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.780 [2024-11-06 13:02:03.164161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.780 [2024-11-06 13:02:03.164163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:22.042 13:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:22.303 [2024-11-06 13:02:04.031776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.303 13:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:22.564 13:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:22.564 [2024-11-06 13:02:04.422511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:22.564 13:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:22.825 13:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:23.086 Malloc0 00:05:23.086 13:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:23.348 Delay0 00:05:23.348 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.609 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:23.609 NULL1 00:05:23.609 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:23.870 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1509013 00:05:23.870 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:23.870 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:23.870 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.131 Read completed with error (sct=0, sc=11) 00:05:24.131 13:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.131 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.393 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:24.393 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:24.393 true 00:05:24.393 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:24.393 13:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.336 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.597 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:25.597 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:25.597 true 00:05:25.597 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:25.597 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.858 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
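With the target up, the RPCs above assemble the stress fixture: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a 32 MB Malloc0 (512-byte blocks) wrapped by the delay bdev Delay0, a 1000 MB null bdev NULL1, and both bdevs attached as namespaces. Lifted directly from the traced commands:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1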
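spdk_nvme_perf (PID 1509013, captured as PERF_PID at ns_hotplug_stress.sh@42) then drives randread at queue depth 128 with 512-byte I/O for 30 seconds while the script hot-plugs namespace 1 underneath it. The remainder of this trace is that loop; stripped of xtrace noise it reduces to:

    null_size=1000
    while kill -0 "$PERF_PID"; do                                   # run until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # detach ns 1 under live I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$(( null_size + 1 ))
        $rpc bdev_null_resize NULL1 "$null_size"                    # prints 'true' on success
    done

The recurring 'Read completed with error (sct=0, sc=11)' lines are the intended signal: reads complete with generic status 0x0b, Invalid Namespace or Format, while namespace 1 is detached, instead of crashing the target, and the 'Message suppressed 999 times' summaries apparently follow from the -Q 1000 error-logging cap on the perf command line.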
00:05:26.119 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:26.119 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:26.119 true 00:05:26.119 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:26.119 13:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.379 13:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.641 13:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:26.641 13:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:26.641 true 00:05:26.641 13:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:26.641 13:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.600 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.883 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:27.883 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:27.883 true 00:05:27.883 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:27.883 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.154 13:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.446 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:28.446 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:28.446 true 00:05:28.446 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:28.446 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.707 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.968 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:28.968 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:28.968 true 00:05:28.968 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:28.968 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.227 13:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.227 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:29.227 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:29.489 true 00:05:29.489 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:29.489 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.757 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.757 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:29.757 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:30.018 true 00:05:30.018 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:30.018 13:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.279 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.279 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:30.279 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:30.539 true 00:05:30.539 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:30.539 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.800 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.800 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:30.800 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:31.059 true 00:05:31.060 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:31.060 13:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.320 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.320 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:31.320 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:31.580 true 00:05:31.580 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:31.580 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.839 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.839 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:31.839 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:32.100 true 00:05:32.100 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:32.100 13:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.361 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.361 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:32.362 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:32.623 true 00:05:32.623 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:32.623 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.883 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.883 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:32.883 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:33.144 true 00:05:33.144 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:33.144 13:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.405 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.405 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:33.405 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:33.666 true 00:05:33.666 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:33.666 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.927 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.189 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:34.189 13:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:34.189 true 00:05:34.189 13:02:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:34.189 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.449 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.710 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:34.710 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:34.710 true 00:05:34.710 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:34.710 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.971 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.231 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:35.231 13:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:35.231 true 00:05:35.231 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:35.231 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.493 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.753 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:35.754 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:35.754 true 00:05:35.754 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:35.754 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.014 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.274 13:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:36.274 13:02:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:36.274 true 00:05:36.274 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:36.274 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.534 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.794 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:36.794 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:36.794 true 00:05:36.794 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:36.794 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.054 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.313 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:37.313 13:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:37.313 true 00:05:37.313 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:37.313 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.571 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.830 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:37.830 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:37.830 true 00:05:37.830 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:37.830 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.090 13:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.351 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:38.351 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:38.351 true 00:05:38.351 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:38.351 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.611 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.870 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:38.870 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:38.870 true 00:05:38.870 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:38.870 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.129 13:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.388 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:39.388 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:39.388 true 00:05:39.388 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:39.388 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.648 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.907 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:39.907 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:39.907 true 00:05:39.907 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:39.907 13:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.166 13:02:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.426 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:40.426 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:40.426 true 00:05:40.426 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:40.426 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.686 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.946 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:40.946 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:40.946 true 00:05:41.206 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:41.206 13:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.206 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.466 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:41.466 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:41.466 true 00:05:41.726 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:41.726 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.726 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.988 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:41.988 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:41.988 true 00:05:42.248 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:42.248 13:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.248 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.509 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:42.509 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:42.509 true 00:05:42.509 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:42.509 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.769 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.028 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:43.028 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:43.028 true 00:05:43.289 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:43.289 13:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.289 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.550 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:43.550 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:43.550 true 00:05:43.811 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:43.811 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.811 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.070 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:44.070 13:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:44.330 true 00:05:44.330 13:02:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:44.330 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.330 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.591 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:44.591 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:44.851 true 00:05:44.851 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:44.851 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.851 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.112 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:45.112 13:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:45.374 true 00:05:45.374 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:45.374 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.374 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.634 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:45.634 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:45.895 true 00:05:45.895 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:45.895 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.895 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.156 13:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:46.156 13:02:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:46.416 true 00:05:46.416 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:46.416 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.416 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.676 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:46.676 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:46.936 true 00:05:46.936 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:46.936 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.936 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.195 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:47.195 13:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:47.455 true 00:05:47.455 13:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:47.455 13:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 13:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.835 13:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:48.835 13:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:48.835 true 00:05:48.835 13:02:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:48.835 13:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.772 13:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.033 13:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:50.033 13:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:50.033 true 00:05:50.033 13:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:50.033 13:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.294 13:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.554 13:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:50.554 13:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:50.554 true 00:05:50.554 13:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:50.554 13:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 13:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.935 13:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:51.935 13:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:52.196 true 00:05:52.196 13:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:52.196 13:02:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.136 13:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.136 13:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:53.136 13:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:53.396 true 00:05:53.396 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:53.396 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.657 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.657 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:53.657 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:53.918 true 00:05:53.918 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:53.918 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.179 13:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.179 Initializing NVMe Controllers 00:05:54.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:54.179 Controller IO queue size 128, less than required. 00:05:54.179 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:54.179 Controller IO queue size 128, less than required. 00:05:54.179 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:54.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:54.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:54.179 Initialization complete. Launching workers. 
00:05:54.179 ======================================================== 00:05:54.179 Latency(us) 00:05:54.179 Device Information : IOPS MiB/s Average min max 00:05:54.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1009.36 0.49 30252.54 1521.70 1047216.35 00:05:54.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7583.78 3.70 16878.92 1204.80 481300.31 00:05:54.179 ======================================================== 00:05:54.179 Total : 8593.14 4.20 18449.79 1204.80 1047216.35 00:05:54.179 00:05:54.179 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:54.179 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:54.439 true 00:05:54.439 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1509013 00:05:54.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1509013) - No such process 00:05:54.439 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1509013 00:05:54.439 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:54.699 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:54.960 null0 00:05:54.960 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:54.960 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:54.960 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:55.220 null1 00:05:55.220 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.220 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.220 13:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:55.220 null2 00:05:55.220 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.220 
13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.220 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:55.481 null3 00:05:55.481 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.481 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.481 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:55.741 null4 00:05:55.741 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.741 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.741 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:55.741 null5 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:56.001 null6 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.001 13:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:56.262 null7 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
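The long runs of "@44/@45/@46/@49/@50" xtrace entries earlier in this section come from the single-namespace hotplug loop of ns_hotplug_stress.sh. A minimal sketch of what those traced lines appear to be doing, reconstructed only from the xtrace (not the verbatim script): rpc.py below stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path seen in the log, and $perf_pid is an illustrative name, since the trace only shows the literal pid 1509013.

    while kill -0 "$perf_pid"; do                                        # line 44: loop while the I/O generator lives
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: detach namespace 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                     # line 49: 1021, 1022, ... in the trace
        rpc.py bdev_null_resize NULL1 "$null_size"                       # line 50: prints "true" on success
    done
    wait "$perf_pid"                                                     # line 53: runs once kill -0 reports "No such process"
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # line 54
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2         # line 55

So each iteration races a namespace detach/attach and a backing-bdev resize against live I/O, and the loop ends exactly where the log shows the kill error, the wait, and the final removal of both namespaces.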
00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
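At this point the trace has switched to the multi-threaded phase: lines 58-66 of the script create eight 100 MiB null bdevs with a 4096-byte block size (the bare "null0" ... "null7" lines are the RPC's output) and launch eight background add_remove workers, collecting their pids for the "wait 1515548 ... 1515561" entry that appears below. A sketch of that phase, again reconstructed from the xtrace rather than copied from the script:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; ++i )); do            # lines 59-60: create null0..null7
        rpc.py bdev_null_create "null$i" 100 4096     # 100 MiB null bdev, 4096-byte blocks
    done
    for (( i = 0; i < nthreads; ++i )); do            # lines 62-64: one worker per nsid/bdev pair
        add_remove $((i + 1)) "null$i" &              # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                 # line 66: join all eight workers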
00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
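The interleaved "@14/@16/@17/@18" entries above and below are eight concurrent instances of the script's add_remove helper, each repeatedly attaching and detaching one namespace. From the traced "local nsid=... bdev=..." lines and the ten-iteration counter, the helper appears to be:

    add_remove() {
        local nsid=$1 bdev=$2                                                           # line 14
        for (( i = 0; i < 10; ++i )); do                                                # line 16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }

Because all eight workers target the same subsystem, their add/remove RPCs land in arbitrary order, which is why the @17 and @18 entries for different namespace ids shuffle freely through the rest of this section; that interleaving is exactly the hotplug race the test exercises.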
00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1515548 1515549 1515552 1515553 1515555 1515557 1515559 1515561 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.263 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.524 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.784 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.045 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.305 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.305 13:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.305 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.566 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.828 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
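Not part of this trace, but useful when replaying it by hand: between bursts of add/remove calls, the subsystem's surviving namespaces can be inspected with the standard nvmf_get_subsystems RPC (the jq filter here is illustrative and does not appear in the log):

    # Illustrative only; not a command from this log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'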
00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.829 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.089 13:02:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.089 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.090 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.350 13:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.350 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.611 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.872 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.873 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.134 13:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.134 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.134 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.134 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.134 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.398 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.399 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.399 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.659 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.660 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.920 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.180 
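The interleaved iterations above are the entire body of the stress test. A minimal bash sketch consistent with the ns_hotplug_stress.sh@16-@18 trace (the stress_ns helper name and the eight-way backgrounding are assumptions; the RPC calls, the i < 10 bound, and the nsid-to-null-bdev pairing are taken from the log):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
stress_ns() {   # hypothetical helper; the trace only shows the loop itself
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # ns_hotplug_stress.sh@17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # ns_hotplug_stress.sh@18
    done
}
for n in {1..8}; do stress_ns "$n" "null$((n - 1))" & done   # matches the interleaved nsids 1-8
wait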
[... 00:06:00.180: final (( ++i )) / (( i < 10 )) bookkeeping entries of the draining loops elided ...]
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:00.180 13:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:00.180 rmmod nvme_tcp
00:06:00.180 rmmod nvme_fabrics
00:06:00.180 rmmod nvme_keyring
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
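The set +e / for i in {1..20} block above is a module-unload retry idiom: nvme-tcp may still hold references while connections drain, so the harness keeps retrying before re-enabling errexit. A condensed sketch (break-on-success and the pacing sleep are assumptions; the commands themselves are from the trace):

set +e                                  # unload may legitimately fail at first
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # retry until the module reference count reaches zero
    sleep 1                             # pacing is an assumption; the trace does not show it
done
modprobe -v -r nvme-fabrics
set -e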
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1508319 ']'
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1508319
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1508319 ']'
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1508319
00:06:00.180 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1508319
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1508319'
00:06:00.181 killing process with pid 1508319
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1508319
00:06:00.181 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1508319
00:06:00.441 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:00.442 13:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:02.987 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:02.987
00:06:02.987 real 0m49.195s
00:06:02.987 user 3m18.714s
00:06:02.987 sys 0m16.584s
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:02.988 ************************************
00:06:02.988 END TEST nvmf_ns_hotplug_stress
00:06:02.988 ************************************
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:02.988 ************************************
00:06:02.988 START TEST nvmf_delete_subsystem
00:06:02.988 ************************************
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
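run_test is the wrapper that produces the START/END banners and the real/user/sys timing seen above. A rough sketch of its shape (the real autotest_common.sh version also handles xtrace toggling and argument checks such as the '[' 3 -le 1 ']' test in the trace):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # e.g. delete_subsystem.sh --transport=tcp, as launched above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}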
00:06:02.988 * Looking for test storage...
00:06:02.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:02.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:02.988 --rc genhtml_branch_coverage=1
00:06:02.988 --rc genhtml_function_coverage=1
00:06:02.988 --rc genhtml_legend=1
00:06:02.988 --rc geninfo_all_blocks=1
00:06:02.988 --rc geninfo_unexecuted_blocks=1
00:06:02.988
00:06:02.988 '
[... the LCOV_OPTS= assignment and the export LCOV / LCOV=lcov pair (common/autotest_common.sh@1704-@1705) repeat the same multi-line option block and are elided ...]
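The cmp_versions walk above ends in return 0 because lt 1.15 2 compares field by field and 1 < 2 decides it on the first component. A condensed sketch following the traced variable names (missing fields default to 0; the lt/gt/eq bookkeeping of the real scripts/common.sh is abbreviated into direct returns):

cmp_versions() {
    local IFS=.-:                 # same separators as scripts/common.sh@336-@337
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == ">" || $op == ">=" ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == "<" || $op == "<=" ]]; return; fi
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # versions equal all the way down
}
lt() { cmp_versions "$1" "<" "$2"; }   # lt 1.15 2: ver1=(1 15), ver2=(2), decided by 1 < 2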
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
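NVME_HOSTNQN and NVME_HOSTID above become the NVME_HOST argument pair passed on initiator-side calls. A sketch of typical consumption (the target address, port, and subsystem NQN are the values this run configures elsewhere in the log; the connect itself is illustrative, not part of this trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as traced above
NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, mirroring NVME_HOSTID in the log
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"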
00:06:02.988 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go prefix triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated toolchain prefixes as above ...]:/var/lib/snapd/snap/bin
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated toolchain prefixes as above ...]:/var/lib/snapd/snap/bin
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated toolchain prefixes as above ...]:/var/lib/snapd/snap/bin
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:02.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
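The "integer expression expected" complaint above is the shell evaluating '[' '' -eq 1 ']' against an empty value at nvmf/common.sh line 33; harmless here, but noisy. The usual defensive form defaults the variable before the numeric test (the variable name below is hypothetical, standing in for whatever line 33 actually checks):

# default a possibly-unset flag to 0 before comparing numerically
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # SOME_TEST_FLAG is a placeholder name
    echo "flag enabled"
fi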
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:06:02.989 13:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:06:11.248 Found 0000:31:00.0 (0x8086 - 0x159b)
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
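The ID tables above classify ports by PCI vendor/device pairs; the same check can be reproduced by hand from sysfs (the case list here holds only the two e810 IDs visible in the trace, not the full table):

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")    # e.g. 0x8086, the intel ID bound above
    device=$(<"$pci/device")    # e.g. 0x159b, matched into the e810 list
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo "Found ${pci##*/} ($vendor - $device)" ;;
    esac
done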
13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:11.248 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.248 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:11.249 Found net devices under 0000:31:00.0: cvl_0_0 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:11.249 Found net devices under 0000:31:00.1: cvl_0_1 
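The cvl_0_0/cvl_0_1 names above come from resolving each matched PCI function to its kernel netdev through sysfs; the script then checks that the interface is up (the [[ up == up ]] tests) and that at least one name was found (the (( 1 == 0 )) count guards). A standalone sketch of that lookup:

# Resolve the net interfaces bound to one PCI function, as traced above.
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs glob, one entry per interface
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 on this machine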
00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:11.249 13:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:11.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:06:11.249 00:06:11.249 --- 10.0.0.2 ping statistics --- 00:06:11.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.249 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:11.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:06:11.249 00:06:11.249 --- 10.0.0.1 ping statistics --- 00:06:11.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.249 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1520775 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1520775 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1520775 ']' 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.249 13:02:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.249 13:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 [2024-11-06 13:02:52.229881] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:06:11.249 [2024-11-06 13:02:52.229945] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.249 [2024-11-06 13:02:52.330731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.249 [2024-11-06 13:02:52.382635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:11.249 [2024-11-06 13:02:52.382689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:11.249 [2024-11-06 13:02:52.382698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.249 [2024-11-06 13:02:52.382704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.249 [2024-11-06 13:02:52.382711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:11.249 [2024-11-06 13:02:52.384543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.249 [2024-11-06 13:02:52.384547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 [2024-11-06 13:02:53.087451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:11.249 13:02:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:11.249 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.250 [2024-11-06 13:02:53.111721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.250 NULL1 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.250 Delay0 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.250 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.510 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.510 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1521122 00:06:11.510 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:11.510 13:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:11.510 [2024-11-06 13:02:53.238728] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
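At this point the whole fixture for the delete-under-load test is in place. Stripped of the xtrace noise, it amounts to the following; rpc_cmd in these scripts is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the long Jenkins paths are shortened, and the flag readings in the comments are inferred from the traced values rather than quoted from documentation:

# The target runs inside the namespace that owns cvl_0_0 (10.0.0.2).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency per I/O
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Load generator on the initiator side; the subsystem is then deleted under it.
./build/bin/spdk_nvme_perf -c 0xC -q 128 -w randrw -M 70 -o 512 -P 4 -t 5 \
       -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &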
00:06:13.425 13:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:13.425 13:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.425 13:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 [2024-11-06 13:02:55.365420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e5f00 is same with the state(6) to be set 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with 
error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, 
sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 starting I/O failed: -6 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Read completed with error (sct=0, sc=8) 00:06:13.687 Write completed with error (sct=0, sc=8) 00:06:13.688 starting I/O failed: -6 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 starting I/O failed: -6 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 starting I/O failed: -6 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 starting I/O failed: -6 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 [2024-11-06 13:02:55.368959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c0400d680 is same with the state(6) to be set 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed 
with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:13.688 Read completed with error (sct=0, sc=8) 00:06:13.688 Write completed with error (sct=0, sc=8) 00:06:14.630 [2024-11-06 13:02:56.337478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e75e0 is same with the state(6) to be set 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 [2024-11-06 13:02:56.368728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e60e0 is same with the state(6) to be set 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Write completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.630 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, 
sc=8) 00:06:14.631 [2024-11-06 13:02:56.369256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e64a0 is same with the state(6) to be set 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 [2024-11-06 13:02:56.370118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c0400d350 is same with the state(6) to be set 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Write completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 Read completed with error (sct=0, sc=8) 00:06:14.631 [2024-11-06 13:02:56.370433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6c04000c40 is same with the state(6) to be set 00:06:14.631 Initializing NVMe Controllers 00:06:14.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:14.631 Controller IO queue size 128, less than required. 00:06:14.631 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:14.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:14.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:14.631 Initialization complete. Launching workers. 
00:06:14.631 ======================================================== 00:06:14.631 Latency(us) 00:06:14.631 Device Information : IOPS MiB/s Average min max 00:06:14.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.67 0.08 895289.87 363.50 1007327.84 00:06:14.631 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.26 0.07 942679.85 293.88 1011224.01 00:06:14.631 ======================================================== 00:06:14.631 Total : 320.92 0.16 917625.62 293.88 1011224.01 00:06:14.631 00:06:14.631 [2024-11-06 13:02:56.370862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e75e0 (9): Bad file descriptor 00:06:14.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:14.631 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.631 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:14.631 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1521122 00:06:14.631 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1521122 00:06:15.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1521122) - No such process 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1521122 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1521122 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1521122 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.203 13:02:56 
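The error flood above is the behavior under test rather than a failure of the run: nvmf_delete_subsystem tears down the qpairs while 128 commands sit behind a ~1 s delay bdev, so in-flight commands complete with sct=0, sc=8 (reading the NVMe generic status table, 08h is Command Aborted due to SQ Deletion) and new submissions fail with -6, plausibly -ENXIO once the controller is gone. The script then waits for perf to exit using the kill -0 polling idiom; a sketch of the loop shape inferred from the delay/kill/sleep entries above, with a hypothetical function name, not copied from delete_subsystem.sh:

# kill -0 sends no signal; it only probes whether the pid still exists.
wait_for_perf_exit() {
    local delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && return 1   # roughly a 15 s budget at 0.5 s per turn
        sleep 0.5
    done
}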
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.203 [2024-11-06 13:02:56.901784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1521806 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.203 13:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.203 [2024-11-06 13:02:57.000849] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
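One small idiom above deserves decoding: once kill -0 reports "No such process", the script asserts that wait 1521122 now fails, wrapping it in the NOT helper. Judging from the es= bookkeeping in the trace, NOT behaves roughly as below; the real helper in autotest_common.sh also special-cases exit codes above 128, so treat this as an approximation:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))       # NOT succeeds only when the wrapped command failed
}
NOT wait 1521122        # the pid is already reaped, wait errors, the assertion passes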
00:06:15.775 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.775 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:15.775 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.035 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.035 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:16.035 13:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.604 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.604 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:16.604 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.172 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.172 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:17.172 13:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.742 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.742 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:17.742 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.311 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.311 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:18.311 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.311 Initializing NVMe Controllers 00:06:18.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:18.311 Controller IO queue size 128, less than required. 00:06:18.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:18.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:18.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:18.311 Initialization complete. Launching workers. 
00:06:18.311 ======================================================== 00:06:18.311 Latency(us) 00:06:18.311 Device Information : IOPS MiB/s Average min max 00:06:18.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001867.97 1000156.19 1004555.11 00:06:18.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002928.15 1000364.19 1007426.14 00:06:18.311 ======================================================== 00:06:18.311 Total : 256.00 0.12 1002398.06 1000156.19 1007426.14 00:06:18.311 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1521806 00:06:18.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1521806) - No such process 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1521806 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:18.571 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:18.572 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:18.572 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:18.572 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:18.572 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:18.572 rmmod nvme_tcp 00:06:18.833 rmmod nvme_fabrics 00:06:18.833 rmmod nvme_keyring 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1520775 ']' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1520775 ']' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1520775' 00:06:18.833 killing process with pid 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1520775 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.833 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:21.377 00:06:21.377 real 0m18.415s 00:06:21.377 user 0m30.746s 00:06:21.377 sys 0m6.838s 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.377 ************************************ 00:06:21.377 END TEST nvmf_delete_subsystem 00:06:21.377 ************************************ 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:21.377 ************************************ 00:06:21.377 START TEST nvmf_host_management 00:06:21.377 ************************************ 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.377 * Looking for test storage... 
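Two closing observations on the delete_subsystem run above. First, the second perf pass is numerically self-consistent: the delay bdev adds about 1 s to every I/O and perf holds queue depth 128 per core, so the expected steady state is 128 / 1 s = 128 IOPS per core with an average latency just over 1,000,000 us, exactly what the final table reports. Second, the nvmftestfini teardown undoes the fixture roughly as follows; condensed from the trace, with the namespace removal assumed since _remove_spdk_ns runs with its output discarded:

modprobe -v -r nvme-tcp                                 # rmmods nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt (pid 1520775 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only rules tagged with the SPDK_NVMF comment
ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                # clear the initiator-side address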
00:06:21.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:21.377 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.377 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.378 --rc genhtml_branch_coverage=1 00:06:21.378 --rc genhtml_function_coverage=1 00:06:21.378 --rc genhtml_legend=1 00:06:21.378 --rc geninfo_all_blocks=1 00:06:21.378 --rc geninfo_unexecuted_blocks=1 00:06:21.378 00:06:21.378 ' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:21.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.378 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.379 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:21.379 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:21.379 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:21.379 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:29.529 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:29.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:29.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:29.530 Found net devices under 0000:31:00.0: cvl_0_0 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.530 13:03:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:29.530 Found net devices under 0000:31:00.1: cvl_0_1 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:29.530 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:29.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:06:29.531 00:06:29.531 --- 10.0.0.2 ping statistics --- 00:06:29.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.531 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:29.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:06:29.531 00:06:29.531 --- 10.0.0.1 ping statistics --- 00:06:29.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.531 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1527262 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1527262 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:29.531 13:03:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1527262 ']' 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.531 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:29.531 [2024-11-06 13:03:10.835737] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:06:29.531 [2024-11-06 13:03:10.835808] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.531 [2024-11-06 13:03:10.938440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.531 [2024-11-06 13:03:10.992530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:29.531 [2024-11-06 13:03:10.992587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:29.531 [2024-11-06 13:03:10.992596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:29.531 [2024-11-06 13:03:10.992603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:29.531 [2024-11-06 13:03:10.992610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
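For reference, the namespace-backed test network that nvmf_tcp_init assembles in the trace above reduces to the following minimal sketch. The interface names (cvl_0_0, cvl_0_1), addresses (10.0.0.1/24, 10.0.0.2/24), port 4420 and the tagged iptables rule are taken verbatim from the trace; the wrapper name setup_tcp_test_net is hypothetical.

setup_tcp_test_net() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    # Start from clean interfaces, then move the target NIC into its own netns
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # The initiator keeps 10.0.0.1 in the root namespace; the target gets
    # 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP listener port; the comment tags the rule so that
    # teardown can later delete exactly what this test inserted
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity pings in both directions, as in the trace
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}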
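The nvmfappstart/waitforlisten pair expanded around this point follows a start-then-block pattern: launch nvmf_tgt inside the namespace, remember its pid, and poll until the app is reachable over its UNIX RPC socket. A condensed sketch, with the binary path, core mask and 100-retry budget taken from the trace; the bare socket probe below is an assumption, since the real helper retries rpc_cmd against /var/tmp/spdk.sock:

# Launch the target inside the test namespace and keep its pid
"${NVMF_TARGET_NS_CMD[@]}" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Block until the app answers on its RPC socket, up to max_retries attempts
rpc_addr=/var/tmp/spdk.sock max_retries=100
for ((i = max_retries; i != 0; i--)); do
    if ! kill -s 0 "$nvmfpid" 2>/dev/null; then
        echo "nvmf_tgt exited before listening" >&2
        exit 1
    fi
    [[ -S $rpc_addr ]] && break   # socket is up: done waiting
    sleep 0.5                     # assumption: pause between probes
done
(( i != 0 ))   # i reaching 0 here means the retry budget ran out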
00:06:29.531 [2024-11-06 13:03:10.994742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.531 [2024-11-06 13:03:10.994901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.531 [2024-11-06 13:03:10.995141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.531 [2024-11-06 13:03:10.995143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.794 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.794 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:29.794 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:29.794 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.794 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.055 [2024-11-06 13:03:11.716698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.055 Malloc0 00:06:30.055 [2024-11-06 13:03:11.796648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1527672 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1527672 /var/tmp/bdevperf.sock 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1527672 ']' 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:30.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:30.055 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:30.056 { 00:06:30.056 "params": { 00:06:30.056 "name": "Nvme$subsystem", 00:06:30.056 "trtype": "$TEST_TRANSPORT", 00:06:30.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:30.056 "adrfam": "ipv4", 00:06:30.056 "trsvcid": "$NVMF_PORT", 00:06:30.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:30.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:30.056 "hdgst": ${hdgst:-false}, 00:06:30.056 "ddgst": ${ddgst:-false} 00:06:30.056 }, 00:06:30.056 "method": "bdev_nvme_attach_controller" 00:06:30.056 } 00:06:30.056 EOF 00:06:30.056 )") 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:30.056 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:30.056 "params": { 00:06:30.056 "name": "Nvme0", 00:06:30.056 "trtype": "tcp", 00:06:30.056 "traddr": "10.0.0.2", 00:06:30.056 "adrfam": "ipv4", 00:06:30.056 "trsvcid": "4420", 00:06:30.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:30.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:30.056 "hdgst": false, 00:06:30.056 "ddgst": false 00:06:30.056 }, 00:06:30.056 "method": "bdev_nvme_attach_controller" 00:06:30.056 }' 00:06:30.056 [2024-11-06 13:03:11.906730] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
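The gen_nvmf_target_json expansion just above is what feeds bdevperf its --json config through /dev/fd/63: one bdev_nvme_attach_controller stanza per subsystem number, built from a heredoc so that $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expand while ${hdgst:-false} and ${ddgst:-false} supply defaults. A condensed sketch covering only what the trace shows; the comma-join plus jq pretty-print is valid as-is for the single-subsystem case exercised here:

gen_nvmf_target_json() {
    local subsystem config=()

    # One attach-controller stanza per requested subsystem (default: 1)
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the stanzas and pretty-print, mirroring the IFS=, /
    # printf / jq steps visible in the trace
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

# Invoked as in the trace via process substitution (hence /dev/fd/63):
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
#       -q 64 -o 65536 -w verify -t 10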
00:06:30.056 [2024-11-06 13:03:11.906808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527672 ] 00:06:30.316 [2024-11-06 13:03:12.000733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.316 [2024-11-06 13:03:12.055004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.577 Running I/O for 10 seconds... 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:31.150 13:03:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.150 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.150 [2024-11-06 13:03:12.807829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.807890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.807913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.807922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.807933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.807941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.807952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.807960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.807970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.807978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.807988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.150 [2024-11-06 13:03:12.808005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.150 [2024-11-06 13:03:12.808015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 
13:03:12.808060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.151 [2024-11-06 13:03:12.808696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.151 [2024-11-06 13:03:12.808705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.808984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.808992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.809002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.809010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.809019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.809029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.809039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.809047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.809056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:31.152 [2024-11-06 13:03:12.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:31.152 [2024-11-06 13:03:12.809073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3cc60 is same with the state(6) to be set 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.152 [2024-11-06 13:03:12.810396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.152 task offset: 106368 on job bdev=Nvme0n1 fails 00:06:31.152 00:06:31.152 Latency(us) 00:06:31.152 [2024-11-06T12:03:13.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:31.152 Job: Nvme0n1 ended in about 0.59 seconds with error 00:06:31.152 Verification LBA range: start 0x0 length 0x400 00:06:31.152 Nvme0n1 : 0.59 1310.85 81.93 109.24 0.00 44032.28 5952.85 38229.33 00:06:31.152 [2024-11-06T12:03:13.054Z] =================================================================================================================== 00:06:31.152 [2024-11-06T12:03:13.054Z] Total : 1310.85 81.93 109.24 0.00 44032.28 5952.85 38229.33 00:06:31.152 [2024-11-06 13:03:12.812657] 
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.152 [2024-11-06 13:03:12.812699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2c280 (9): Bad file descriptor 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.152 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:31.152 [2024-11-06 13:03:12.824487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1527672 00:06:32.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1527672) - No such process 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:32.093 { 00:06:32.093 "params": { 00:06:32.093 "name": "Nvme$subsystem", 00:06:32.093 "trtype": "$TEST_TRANSPORT", 00:06:32.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:32.093 "adrfam": "ipv4", 00:06:32.093 "trsvcid": "$NVMF_PORT", 00:06:32.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:32.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:32.093 "hdgst": ${hdgst:-false}, 00:06:32.093 "ddgst": ${ddgst:-false} 00:06:32.093 }, 00:06:32.093 "method": "bdev_nvme_attach_controller" 00:06:32.093 } 00:06:32.093 EOF 00:06:32.093 )") 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:32.093 13:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:32.093 "params": { 00:06:32.093 "name": "Nvme0", 00:06:32.093 "trtype": "tcp", 00:06:32.093 "traddr": "10.0.0.2", 00:06:32.093 "adrfam": "ipv4", 00:06:32.093 "trsvcid": "4420", 00:06:32.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:32.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:32.093 "hdgst": false, 00:06:32.093 "ddgst": false 00:06:32.093 }, 00:06:32.093 "method": "bdev_nvme_attach_controller" 00:06:32.093 }' 00:06:32.093 [2024-11-06 13:03:13.883256] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:06:32.093 [2024-11-06 13:03:13.883309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528148 ] 00:06:32.093 [2024-11-06 13:03:13.972548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.353 [2024-11-06 13:03:14.007897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.613 Running I/O for 1 seconds... 00:06:33.555 1661.00 IOPS, 103.81 MiB/s 00:06:33.555 Latency(us) 00:06:33.555 [2024-11-06T12:03:15.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:33.555 Verification LBA range: start 0x0 length 0x400 00:06:33.555 Nvme0n1 : 1.02 1687.58 105.47 0.00 0.00 37249.56 6881.28 33423.36 00:06:33.555 [2024-11-06T12:03:15.457Z] =================================================================================================================== 00:06:33.555 [2024-11-06T12:03:15.457Z] Total : 1687.58 105.47 0.00 0.00 37249.56 6881.28 33423.36 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:33.815 rmmod nvme_tcp 00:06:33.815 rmmod nvme_fabrics 00:06:33.815 rmmod nvme_keyring 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1527262 ']' 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1527262 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1527262 ']' 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1527262 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1527262 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1527262' 00:06:33.815 killing process with pid 1527262 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1527262 00:06:33.815 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1527262 00:06:33.815 [2024-11-06 13:03:15.703129] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.075 13:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:35.986 13:03:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:35.986 00:06:35.986 real 0m14.968s 00:06:35.986 user 0m23.729s 00:06:35.986 sys 0m6.950s 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.986 ************************************ 00:06:35.986 END TEST nvmf_host_management 00:06:35.986 ************************************ 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.986 13:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.248 ************************************ 00:06:36.248 START TEST nvmf_lvol 00:06:36.248 ************************************ 00:06:36.248 13:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:36.248 * Looking for test storage... 00:06:36.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.248 13:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.248 --rc genhtml_branch_coverage=1 00:06:36.248 --rc genhtml_function_coverage=1 00:06:36.248 --rc genhtml_legend=1 00:06:36.248 --rc geninfo_all_blocks=1 00:06:36.248 --rc geninfo_unexecuted_blocks=1 00:06:36.248 00:06:36.248 ' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.248 --rc genhtml_branch_coverage=1 00:06:36.248 --rc genhtml_function_coverage=1 00:06:36.248 --rc genhtml_legend=1 00:06:36.248 --rc geninfo_all_blocks=1 00:06:36.248 --rc geninfo_unexecuted_blocks=1 00:06:36.248 00:06:36.248 ' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.248 --rc genhtml_branch_coverage=1 00:06:36.248 --rc genhtml_function_coverage=1 00:06:36.248 --rc genhtml_legend=1 00:06:36.248 --rc geninfo_all_blocks=1 00:06:36.248 --rc geninfo_unexecuted_blocks=1 00:06:36.248 00:06:36.248 ' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.248 --rc genhtml_branch_coverage=1 00:06:36.248 --rc genhtml_function_coverage=1 00:06:36.248 --rc genhtml_legend=1 00:06:36.248 --rc geninfo_all_blocks=1 00:06:36.248 --rc geninfo_unexecuted_blocks=1 00:06:36.248 00:06:36.248 ' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
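The xtrace above is the suite's lcov version gate: `lt 1.15 2` delegates to `cmp_versions`, which splits each version string on `.`, `-`, and `:` and compares the parts numerically, left to right. The following is a minimal standalone sketch of that traced logic, assuming the `<` path only (the real scripts/common.sh helper also supports other operators and routes non-numeric parts through its decimal() normalizer):

    #!/usr/bin/env bash
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        # split "1.15" -> (1 15) and "2" -> (2) on '.', '-' and ':'
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # walk the longer of the two versions, padding missing parts with 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # '<' holds
        done
        return 1   # all parts equal, so '<' is false
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints, matching the trace's return 0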
00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.248 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:36.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:36.249 13:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.390 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:44.391 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:44.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.391 13:03:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:44.391 Found net devices under 0000:31:00.0: cvl_0_0 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:44.391 Found net devices under 0000:31:00.1: cvl_0_1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:06:44.391 00:06:44.391 --- 10.0.0.2 ping statistics --- 00:06:44.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.391 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:44.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:06:44.391 00:06:44.391 --- 10.0.0.1 ping statistics --- 00:06:44.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.391 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1532737 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1532737 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1532737 ']' 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.391 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.391 [2024-11-06 13:03:25.841736] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:06:44.392 [2024-11-06 13:03:25.841820] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.392 [2024-11-06 13:03:25.941879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.392 [2024-11-06 13:03:25.994198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.392 [2024-11-06 13:03:25.994247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.392 [2024-11-06 13:03:25.994260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.392 [2024-11-06 13:03:25.994268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.392 [2024-11-06 13:03:25.994274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.392 [2024-11-06 13:03:25.996397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.392 [2024-11-06 13:03:25.996553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.392 [2024-11-06 13:03:25.996553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.962 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.223 [2024-11-06 13:03:26.872539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.223 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.485 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:45.485 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.485 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:45.485 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:45.745 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:46.006 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=417e970e-8563-4096-aec2-2c3c412425fc 00:06:46.006 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 417e970e-8563-4096-aec2-2c3c412425fc lvol 20 00:06:46.268 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4f87d9da-a518-42f6-818a-1b2b65f36836 00:06:46.268 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.268 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f87d9da-a518-42f6-818a-1b2b65f36836 00:06:46.528 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:46.786 [2024-11-06 13:03:28.525006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.786 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.046 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1533251 00:06:47.046 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:47.046 13:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:47.982 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4f87d9da-a518-42f6-818a-1b2b65f36836 MY_SNAPSHOT 00:06:48.242 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d3a68815-d814-4ff7-9ea9-6962a3417755 00:06:48.242 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4f87d9da-a518-42f6-818a-1b2b65f36836 30 00:06:48.502 13:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d3a68815-d814-4ff7-9ea9-6962a3417755 MY_CLONE 00:06:48.503 13:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=26ad75db-bb05-4741-b6b4-c73cb67617db 00:06:48.503 13:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 26ad75db-bb05-4741-b6b4-c73cb67617db 00:06:49.074 13:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1533251 00:06:57.214 Initializing NVMe Controllers 00:06:57.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:57.214 Controller IO queue size 128, less than required. 00:06:57.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:57.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:57.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:57.214 Initialization complete. Launching workers. 00:06:57.214 ======================================================== 00:06:57.214 Latency(us) 00:06:57.214 Device Information : IOPS MiB/s Average min max 00:06:57.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16159.70 63.12 7922.11 1483.30 53035.48 00:06:57.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16576.40 64.75 7722.68 3800.57 57915.51 00:06:57.214 ======================================================== 00:06:57.214 Total : 32736.10 127.88 7821.13 1483.30 57915.51 00:06:57.214 00:06:57.214 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:57.475 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f87d9da-a518-42f6-818a-1b2b65f36836 00:06:57.736 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 417e970e-8563-4096-aec2-2c3c412425fc 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.996 rmmod nvme_tcp 00:06:57.996 rmmod nvme_fabrics 00:06:57.996 rmmod nvme_keyring 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1532737 ']' 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1532737 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1532737 ']' 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1532737 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1532737 00:06:57.996 13:03:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1532737' 00:06:57.996 killing process with pid 1532737 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1532737 00:06:57.996 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1532737 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.257 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.168 00:07:00.168 real 0m24.124s 00:07:00.168 user 1m4.962s 00:07:00.168 sys 0m8.850s 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.168 ************************************ 00:07:00.168 END TEST nvmf_lvol 00:07:00.168 ************************************ 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.168 13:03:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.430 ************************************ 00:07:00.430 START TEST nvmf_lvs_grow 00:07:00.430 ************************************ 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.430 * Looking for test storage... 
00:07:00.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.430 --rc genhtml_branch_coverage=1 00:07:00.430 --rc genhtml_function_coverage=1 00:07:00.430 --rc genhtml_legend=1 00:07:00.430 --rc geninfo_all_blocks=1 00:07:00.430 --rc geninfo_unexecuted_blocks=1 00:07:00.430 00:07:00.430 ' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.430 --rc genhtml_branch_coverage=1 00:07:00.430 --rc genhtml_function_coverage=1 00:07:00.430 --rc genhtml_legend=1 00:07:00.430 --rc geninfo_all_blocks=1 00:07:00.430 --rc geninfo_unexecuted_blocks=1 00:07:00.430 00:07:00.430 ' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.430 --rc genhtml_branch_coverage=1 00:07:00.430 --rc genhtml_function_coverage=1 00:07:00.430 --rc genhtml_legend=1 00:07:00.430 --rc geninfo_all_blocks=1 00:07:00.430 --rc geninfo_unexecuted_blocks=1 00:07:00.430 00:07:00.430 ' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:00.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.430 --rc genhtml_branch_coverage=1 00:07:00.430 --rc genhtml_function_coverage=1 00:07:00.430 --rc genhtml_legend=1 00:07:00.430 --rc geninfo_all_blocks=1 00:07:00.430 --rc geninfo_unexecuted_blocks=1 00:07:00.430 00:07:00.430 ' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:00.430 13:03:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.430 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.431 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.691 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.691 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.691 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.691 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.691 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.692 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:08.835 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:08.835 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:08.835 13:03:49 
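The device scan above is driven by a cache of PCI vendor:device IDs; both E810 ports match the Intel 0x8086:0x159b entry, and their kernel interfaces (cvl_0_0, cvl_0_1) are then resolved through the per-device net/ directory in sysfs. A rough sysfs-only sketch of the same lookup, assuming Linux and the IDs shown in the trace (the harness's own walk is more elaborate):

  # Sketch: find net interfaces backed by Intel E810 NICs (0x8086:0x159b),
  # mirroring the "Found net devices under ..." lines above.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found ${pci##*/}: ${net##*/}"
      done
  done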
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:08.835 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:08.836 Found net devices under 0000:31:00.0: cvl_0_0 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:08.836 Found net devices under 0000:31:00.1: cvl_0_1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:08.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:07:08.836 00:07:08.836 --- 10.0.0.2 ping statistics --- 00:07:08.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.836 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:07:08.836 00:07:08.836 --- 10.0.0.1 ping statistics --- 00:07:08.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.836 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1539872 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1539872 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1539872 ']' 00:07:08.836 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.837 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.837 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.837 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.837 13:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.837 [2024-11-06 13:03:50.039056] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
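Up to this point nvmf_tcp_init has split the two ports into a target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP/4420 is opened in iptables, and one ping in each direction proves the path before nvmf_tgt is started inside the namespace. A condensed sketch of that sequence (requires root; binary paths shortened, and the socket poll at the end is a simplified stand-in for waitforlisten):

  # Sketch of the target/initiator split traced above; interface names
  # and addresses are taken from the log.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # Launch the target inside the namespace, then wait for its RPC socket
  # (simplified; the harness uses waitforlisten with retries and a timeout):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done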
00:07:08.837 [2024-11-06 13:03:50.039122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.837 [2024-11-06 13:03:50.139588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.837 [2024-11-06 13:03:50.192753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.837 [2024-11-06 13:03:50.192811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.837 [2024-11-06 13:03:50.192820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.837 [2024-11-06 13:03:50.192827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.837 [2024-11-06 13:03:50.192833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.837 [2024-11-06 13:03:50.193692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.099 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.099 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:09.099 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.099 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.100 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.100 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.100 13:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:09.361 [2024-11-06 13:03:51.083979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 ************************************ 00:07:09.361 START TEST lvs_grow_clean 00:07:09.361 ************************************ 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.361 13:03:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.361 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.362 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.623 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:09.623 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:09.884 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 lvol 150 00:07:10.146 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bcfa459-573e-470f-b061-97ff63f402f0 00:07:10.146 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.146 13:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:10.407 [2024-11-06 13:03:52.125570] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:10.407 [2024-11-06 13:03:52.125646] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:10.407 true 00:07:10.407 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:10.407 13:03:52 
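The fixture built above is worth restating: a 200 MiB file exposed as a 4 KiB-block AIO bdev hosts an lvstore with 4 MiB clusters, which leaves 49 data clusters once metadata is accounted for, and a 150 MiB lvol consumes 38 of them. The file is then grown to 400 MiB and bdev_aio_rescan doubles the bdev (51200 to 102400 blocks), while the lvstore keeps reporting 49 clusters until it is grown explicitly. A sketch of the same flow (rpc.py path shortened and the backing file relocated; the uuids are captured from RPC output rather than hard-coded):

  # Sketch of the lvs_grow setup traced above.
  rpc=./scripts/rpc.py
  truncate -s 200M /tmp/aio_file                       # backing file
  $rpc bdev_aio_create /tmp/aio_file aio_bdev 4096     # 51200 x 4KiB blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB = 38 clusters
  truncate -s 400M /tmp/aio_file                       # grow the file...
  $rpc bdev_aio_rescan aio_bdev                        # ...then the bdev (102400 blocks)
  # The lvstore itself still reports 49 clusters at this point.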
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:10.668 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:10.668 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.668 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bcfa459-573e-470f-b061-97ff63f402f0 00:07:10.929 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:11.191 [2024-11-06 13:03:52.851904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.191 13:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1540388 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1540388 /var/tmp/bdevperf.sock 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1540388 ']' 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:11.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.191 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:11.452 [2024-11-06 13:03:53.097336] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
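The lvol is then exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and a second SPDK app, bdevperf on core mask 0x2 with its own RPC socket, attaches to it as controller Nvme0; that attachment is where the Nvme0n1 bdev dumped below comes from. A sketch with the same flags as the trace ($rpc and $lvol as in the previous sketch; the waitforlisten on bdevperf's socket is omitted for brevity):

  # Export the lvol over NVMe/TCP and attach from the initiator side.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420
  # Separate initiator-side app with its own RPC socket:
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
       -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0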
00:07:11.452 [2024-11-06 13:03:53.097404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540388 ] 00:07:11.452 [2024-11-06 13:03:53.189888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.452 [2024-11-06 13:03:53.243183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.024 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.024 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:12.024 13:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:12.285 Nvme0n1 00:07:12.285 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:12.553 [ 00:07:12.553 { 00:07:12.553 "name": "Nvme0n1", 00:07:12.553 "aliases": [ 00:07:12.553 "9bcfa459-573e-470f-b061-97ff63f402f0" 00:07:12.553 ], 00:07:12.553 "product_name": "NVMe disk", 00:07:12.553 "block_size": 4096, 00:07:12.553 "num_blocks": 38912, 00:07:12.553 "uuid": "9bcfa459-573e-470f-b061-97ff63f402f0", 00:07:12.553 "numa_id": 0, 00:07:12.553 "assigned_rate_limits": { 00:07:12.553 "rw_ios_per_sec": 0, 00:07:12.553 "rw_mbytes_per_sec": 0, 00:07:12.553 "r_mbytes_per_sec": 0, 00:07:12.553 "w_mbytes_per_sec": 0 00:07:12.553 }, 00:07:12.553 "claimed": false, 00:07:12.553 "zoned": false, 00:07:12.553 "supported_io_types": { 00:07:12.553 "read": true, 00:07:12.553 "write": true, 00:07:12.553 "unmap": true, 00:07:12.553 "flush": true, 00:07:12.553 "reset": true, 00:07:12.553 "nvme_admin": true, 00:07:12.553 "nvme_io": true, 00:07:12.553 "nvme_io_md": false, 00:07:12.553 "write_zeroes": true, 00:07:12.553 "zcopy": false, 00:07:12.553 "get_zone_info": false, 00:07:12.553 "zone_management": false, 00:07:12.553 "zone_append": false, 00:07:12.553 "compare": true, 00:07:12.553 "compare_and_write": true, 00:07:12.553 "abort": true, 00:07:12.553 "seek_hole": false, 00:07:12.553 "seek_data": false, 00:07:12.553 "copy": true, 00:07:12.553 "nvme_iov_md": false 00:07:12.553 }, 00:07:12.553 "memory_domains": [ 00:07:12.553 { 00:07:12.553 "dma_device_id": "system", 00:07:12.553 "dma_device_type": 1 00:07:12.553 } 00:07:12.553 ], 00:07:12.553 "driver_specific": { 00:07:12.553 "nvme": [ 00:07:12.553 { 00:07:12.553 "trid": { 00:07:12.553 "trtype": "TCP", 00:07:12.553 "adrfam": "IPv4", 00:07:12.553 "traddr": "10.0.0.2", 00:07:12.553 "trsvcid": "4420", 00:07:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:12.553 }, 00:07:12.553 "ctrlr_data": { 00:07:12.553 "cntlid": 1, 00:07:12.553 "vendor_id": "0x8086", 00:07:12.553 "model_number": "SPDK bdev Controller", 00:07:12.553 "serial_number": "SPDK0", 00:07:12.553 "firmware_revision": "25.01", 00:07:12.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.553 "oacs": { 00:07:12.553 "security": 0, 00:07:12.553 "format": 0, 00:07:12.553 "firmware": 0, 00:07:12.553 "ns_manage": 0 00:07:12.553 }, 00:07:12.553 "multi_ctrlr": true, 00:07:12.553 
"ana_reporting": false 00:07:12.553 }, 00:07:12.553 "vs": { 00:07:12.553 "nvme_version": "1.3" 00:07:12.553 }, 00:07:12.553 "ns_data": { 00:07:12.553 "id": 1, 00:07:12.553 "can_share": true 00:07:12.553 } 00:07:12.553 } 00:07:12.553 ], 00:07:12.553 "mp_policy": "active_passive" 00:07:12.553 } 00:07:12.553 } 00:07:12.553 ] 00:07:12.553 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1540704 00:07:12.553 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:12.553 13:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:12.845 Running I/O for 10 seconds... 00:07:13.896 Latency(us) 00:07:13.896 [2024-11-06T12:03:55.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.896 Nvme0n1 : 1.00 24614.00 96.15 0.00 0.00 0.00 0.00 0.00 00:07:13.896 [2024-11-06T12:03:55.798Z] =================================================================================================================== 00:07:13.896 [2024-11-06T12:03:55.798Z] Total : 24614.00 96.15 0.00 0.00 0.00 0.00 0.00 00:07:13.896 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:14.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.836 Nvme0n1 : 2.00 25065.50 97.91 0.00 0.00 0.00 0.00 0.00 00:07:14.836 [2024-11-06T12:03:56.738Z] =================================================================================================================== 00:07:14.836 [2024-11-06T12:03:56.738Z] Total : 25065.50 97.91 0.00 0.00 0.00 0.00 0.00 00:07:14.836 00:07:14.836 true 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:14.836 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1540704 00:07:15.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.776 Nvme0n1 : 3.00 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:15.776 [2024-11-06T12:03:57.678Z] =================================================================================================================== 00:07:15.776 [2024-11-06T12:03:57.678Z] Total : 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:07:15.776 00:07:16.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.715 Nvme0n1 : 4.00 25309.00 98.86 0.00 0.00 0.00 0.00 0.00 00:07:16.715 [2024-11-06T12:03:58.617Z] 
=================================================================================================================== 00:07:16.715 [2024-11-06T12:03:58.617Z] Total : 25309.00 98.86 0.00 0.00 0.00 0.00 0.00 00:07:16.715 00:07:17.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.657 Nvme0n1 : 5.00 25367.00 99.09 0.00 0.00 0.00 0.00 0.00 00:07:17.657 [2024-11-06T12:03:59.559Z] =================================================================================================================== 00:07:17.657 [2024-11-06T12:03:59.559Z] Total : 25367.00 99.09 0.00 0.00 0.00 0.00 0.00 00:07:17.657 00:07:18.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.597 Nvme0n1 : 6.00 25393.67 99.19 0.00 0.00 0.00 0.00 0.00 00:07:18.597 [2024-11-06T12:04:00.499Z] =================================================================================================================== 00:07:18.597 [2024-11-06T12:04:00.499Z] Total : 25393.67 99.19 0.00 0.00 0.00 0.00 0.00 00:07:18.597 00:07:19.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.975 Nvme0n1 : 7.00 25422.71 99.31 0.00 0.00 0.00 0.00 0.00 00:07:19.975 [2024-11-06T12:04:01.877Z] =================================================================================================================== 00:07:19.975 [2024-11-06T12:04:01.877Z] Total : 25422.71 99.31 0.00 0.00 0.00 0.00 0.00 00:07:19.975 00:07:20.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.916 Nvme0n1 : 8.00 25450.38 99.42 0.00 0.00 0.00 0.00 0.00 00:07:20.916 [2024-11-06T12:04:02.818Z] =================================================================================================================== 00:07:20.916 [2024-11-06T12:04:02.818Z] Total : 25450.38 99.42 0.00 0.00 0.00 0.00 0.00 00:07:20.916 00:07:21.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.858 Nvme0n1 : 9.00 25466.67 99.48 0.00 0.00 0.00 0.00 0.00 00:07:21.858 [2024-11-06T12:04:03.760Z] =================================================================================================================== 00:07:21.858 [2024-11-06T12:04:03.760Z] Total : 25466.67 99.48 0.00 0.00 0.00 0.00 0.00 00:07:21.858 00:07:22.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.800 Nvme0n1 : 10.00 25485.90 99.55 0.00 0.00 0.00 0.00 0.00 00:07:22.800 [2024-11-06T12:04:04.702Z] =================================================================================================================== 00:07:22.800 [2024-11-06T12:04:04.702Z] Total : 25485.90 99.55 0.00 0.00 0.00 0.00 0.00 00:07:22.800 00:07:22.800 00:07:22.800 Latency(us) 00:07:22.800 [2024-11-06T12:04:04.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.800 Nvme0n1 : 10.00 25480.56 99.53 0.00 0.00 5019.76 2362.03 15510.19 00:07:22.800 [2024-11-06T12:04:04.702Z] =================================================================================================================== 00:07:22.800 [2024-11-06T12:04:04.702Z] Total : 25480.56 99.53 0.00 0.00 5019.76 2362.03 15510.19 00:07:22.800 { 00:07:22.800 "results": [ 00:07:22.800 { 00:07:22.800 "job": "Nvme0n1", 00:07:22.800 "core_mask": "0x2", 00:07:22.800 "workload": "randwrite", 00:07:22.800 "status": "finished", 00:07:22.800 "queue_depth": 128, 00:07:22.800 "io_size": 4096, 00:07:22.800 
"runtime": 10.004648, 00:07:22.800 "iops": 25480.556637274996, 00:07:22.800 "mibps": 99.53342436435545, 00:07:22.800 "io_failed": 0, 00:07:22.800 "io_timeout": 0, 00:07:22.800 "avg_latency_us": 5019.757816238043, 00:07:22.800 "min_latency_us": 2362.0266666666666, 00:07:22.800 "max_latency_us": 15510.186666666666 00:07:22.800 } 00:07:22.800 ], 00:07:22.800 "core_count": 1 00:07:22.800 } 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1540388 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1540388 ']' 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1540388 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1540388 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1540388' 00:07:22.800 killing process with pid 1540388 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1540388 00:07:22.800 Received shutdown signal, test time was about 10.000000 seconds 00:07:22.800 00:07:22.800 Latency(us) 00:07:22.800 [2024-11-06T12:04:04.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.800 [2024-11-06T12:04:04.702Z] =================================================================================================================== 00:07:22.800 [2024-11-06T12:04:04.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1540388 00:07:22.800 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.061 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.321 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:23.321 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:23.583 13:04:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.583 [2024-11-06 13:04:05.408018] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.583 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:23.844 request: 00:07:23.844 { 00:07:23.844 "uuid": "43cbae65-0f6e-4df4-9c2c-a12a6b2462d4", 00:07:23.844 "method": "bdev_lvol_get_lvstores", 00:07:23.844 "req_id": 1 00:07:23.844 } 00:07:23.844 Got JSON-RPC error response 00:07:23.844 response: 00:07:23.844 { 00:07:23.844 "code": -19, 00:07:23.844 "message": "No such device" 00:07:23.844 } 00:07:23.844 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:23.844 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.844 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.844 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.844 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.105 aio_bdev 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9bcfa459-573e-470f-b061-97ff63f402f0 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=9bcfa459-573e-470f-b061-97ff63f402f0 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:24.105 13:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bcfa459-573e-470f-b061-97ff63f402f0 -t 2000 00:07:24.366 [ 00:07:24.366 { 00:07:24.366 "name": "9bcfa459-573e-470f-b061-97ff63f402f0", 00:07:24.366 "aliases": [ 00:07:24.366 "lvs/lvol" 00:07:24.366 ], 00:07:24.366 "product_name": "Logical Volume", 00:07:24.366 "block_size": 4096, 00:07:24.366 "num_blocks": 38912, 00:07:24.366 "uuid": "9bcfa459-573e-470f-b061-97ff63f402f0", 00:07:24.366 "assigned_rate_limits": { 00:07:24.366 "rw_ios_per_sec": 0, 00:07:24.366 "rw_mbytes_per_sec": 0, 00:07:24.366 "r_mbytes_per_sec": 0, 00:07:24.366 "w_mbytes_per_sec": 0 00:07:24.366 }, 00:07:24.366 "claimed": false, 00:07:24.366 "zoned": false, 00:07:24.366 "supported_io_types": { 00:07:24.366 "read": true, 00:07:24.366 "write": true, 00:07:24.366 "unmap": true, 00:07:24.366 "flush": false, 00:07:24.366 "reset": true, 00:07:24.366 "nvme_admin": false, 00:07:24.366 "nvme_io": false, 00:07:24.366 "nvme_io_md": false, 00:07:24.366 "write_zeroes": true, 00:07:24.366 "zcopy": false, 00:07:24.366 "get_zone_info": false, 00:07:24.366 "zone_management": false, 00:07:24.366 "zone_append": false, 00:07:24.366 "compare": false, 00:07:24.366 "compare_and_write": false, 00:07:24.366 "abort": false, 00:07:24.366 "seek_hole": true, 00:07:24.366 "seek_data": true, 00:07:24.366 "copy": false, 00:07:24.366 "nvme_iov_md": false 00:07:24.366 }, 00:07:24.366 "driver_specific": { 00:07:24.366 "lvol": { 00:07:24.366 "lvol_store_uuid": "43cbae65-0f6e-4df4-9c2c-a12a6b2462d4", 00:07:24.366 "base_bdev": "aio_bdev", 00:07:24.366 "thin_provision": false, 00:07:24.366 "num_allocated_clusters": 38, 00:07:24.366 "snapshot": false, 00:07:24.366 "clone": false, 00:07:24.366 "esnap_clone": false 00:07:24.366 } 00:07:24.366 } 00:07:24.366 } 00:07:24.366 ] 00:07:24.366 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:24.366 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:24.366 
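Deleting the AIO bdev out from under the live lvstore triggers the hot-remove path, so the NOT-wrapped bdev_lvol_get_lvstores above is expected to fail, and the JSON-RPC error -19 (No such device) response is the passing outcome; recreating the AIO bdev lets the lvstore be re-examined from its on-disk metadata, and waitforbdev polls until the lvol uuid reappears within its 2000 ms budget. A simplified stand-in for that polling helper (not the autotest_common.sh implementation):

  # Poll bdev_get_bdevs until the named bdev exists or ~2s elapses.
  wait_for_bdev() {
      local rpc=./scripts/rpc.py name=$1 i
      for (( i = 0; i < 20; i++ )); do
          "$rpc" bdev_get_bdevs -b "$name" -t 100 >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_bdev 9bcfa459-573e-470f-b061-97ff63f402f0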
13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:24.626 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:24.626 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:24.626 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:24.626 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:24.626 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9bcfa459-573e-470f-b061-97ff63f402f0 00:07:24.887 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43cbae65-0f6e-4df4-9c2c-a12a6b2462d4 00:07:25.148 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.148 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.148 00:07:25.148 real 0m15.874s 00:07:25.148 user 0m15.590s 00:07:25.148 sys 0m1.364s 00:07:25.148 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.148 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:25.148 ************************************ 00:07:25.148 END TEST lvs_grow_clean 00:07:25.148 ************************************ 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.409 ************************************ 00:07:25.409 START TEST lvs_grow_dirty 00:07:25.409 ************************************ 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.409 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.669 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:25.669 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:25.669 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=34bc6c28-628f-4140-a642-9750aae60562 00:07:25.669 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:25.669 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:25.930 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:25.930 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:25.930 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34bc6c28-628f-4140-a642-9750aae60562 lvol 150 00:07:26.191 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:26.191 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.191 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:26.191 [2024-11-06 13:04:07.997079] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:26.191 [2024-11-06 13:04:07.997122] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:26.191 true 00:07:26.191 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:26.191 13:04:08 
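[Annotation] The grow mechanic under test is visible in the notices above: truncate enlarges the backing file, bdev_aio_rescan makes the AIO bdev pick up the new block count (51200 -> 102400), and the lvstore only claims the new clusters once bdev_lvol_grow_lvstore is issued, which this test does later while I/O is in flight. Condensed, with the cluster counts observed in this run (4 MiB clusters):

    truncate -s 400M "$AIO_FILE"              # grow the backing file 200M -> 400M
    $RPC bdev_aio_rescan aio_bdev             # AIO bdev now reports 102400 blocks
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 49
    $RPC bdev_lvol_grow_lvstore -u "$lvs"     # lvstore claims the added space
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # now 99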
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:26.452 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:26.452 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:26.452 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:26.712 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:26.973 [2024-11-06 13:04:08.654972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1543705 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1543705 /var/tmp/bdevperf.sock 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1543705 ']' 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.973 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.973 [2024-11-06 13:04:08.870204] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
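[Annotation] Exporting the lvol over NVMe/TCP, as traced just above, takes three RPCs against the target; bdevperf is then launched with -z so it idles until a bdev is attached through its own RPC socket. A sketch with this run's subsystem name and listener address:

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420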
00:07:26.973 [2024-11-06 13:04:08.870254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543705 ] 00:07:27.233 [2024-11-06 13:04:08.955235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.233 [2024-11-06 13:04:08.985243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.233 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.233 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:27.233 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:27.494 Nvme0n1 00:07:27.494 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:27.755 [ 00:07:27.755 { 00:07:27.755 "name": "Nvme0n1", 00:07:27.755 "aliases": [ 00:07:27.755 "cf435fd4-ef9a-4066-b9d6-da228271c6b4" 00:07:27.755 ], 00:07:27.755 "product_name": "NVMe disk", 00:07:27.755 "block_size": 4096, 00:07:27.755 "num_blocks": 38912, 00:07:27.755 "uuid": "cf435fd4-ef9a-4066-b9d6-da228271c6b4", 00:07:27.755 "numa_id": 0, 00:07:27.755 "assigned_rate_limits": { 00:07:27.755 "rw_ios_per_sec": 0, 00:07:27.755 "rw_mbytes_per_sec": 0, 00:07:27.755 "r_mbytes_per_sec": 0, 00:07:27.755 "w_mbytes_per_sec": 0 00:07:27.755 }, 00:07:27.755 "claimed": false, 00:07:27.755 "zoned": false, 00:07:27.755 "supported_io_types": { 00:07:27.755 "read": true, 00:07:27.755 "write": true, 00:07:27.755 "unmap": true, 00:07:27.755 "flush": true, 00:07:27.755 "reset": true, 00:07:27.755 "nvme_admin": true, 00:07:27.755 "nvme_io": true, 00:07:27.755 "nvme_io_md": false, 00:07:27.755 "write_zeroes": true, 00:07:27.755 "zcopy": false, 00:07:27.755 "get_zone_info": false, 00:07:27.755 "zone_management": false, 00:07:27.755 "zone_append": false, 00:07:27.755 "compare": true, 00:07:27.755 "compare_and_write": true, 00:07:27.755 "abort": true, 00:07:27.755 "seek_hole": false, 00:07:27.755 "seek_data": false, 00:07:27.755 "copy": true, 00:07:27.755 "nvme_iov_md": false 00:07:27.755 }, 00:07:27.755 "memory_domains": [ 00:07:27.755 { 00:07:27.755 "dma_device_id": "system", 00:07:27.755 "dma_device_type": 1 00:07:27.755 } 00:07:27.755 ], 00:07:27.755 "driver_specific": { 00:07:27.755 "nvme": [ 00:07:27.755 { 00:07:27.755 "trid": { 00:07:27.755 "trtype": "TCP", 00:07:27.755 "adrfam": "IPv4", 00:07:27.755 "traddr": "10.0.0.2", 00:07:27.755 "trsvcid": "4420", 00:07:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:27.755 }, 00:07:27.755 "ctrlr_data": { 00:07:27.755 "cntlid": 1, 00:07:27.755 "vendor_id": "0x8086", 00:07:27.755 "model_number": "SPDK bdev Controller", 00:07:27.755 "serial_number": "SPDK0", 00:07:27.755 "firmware_revision": "25.01", 00:07:27.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.755 "oacs": { 00:07:27.755 "security": 0, 00:07:27.755 "format": 0, 00:07:27.755 "firmware": 0, 00:07:27.755 "ns_manage": 0 00:07:27.755 }, 00:07:27.755 "multi_ctrlr": true, 00:07:27.755 
"ana_reporting": false 00:07:27.755 }, 00:07:27.755 "vs": { 00:07:27.755 "nvme_version": "1.3" 00:07:27.755 }, 00:07:27.755 "ns_data": { 00:07:27.755 "id": 1, 00:07:27.755 "can_share": true 00:07:27.755 } 00:07:27.755 } 00:07:27.755 ], 00:07:27.755 "mp_policy": "active_passive" 00:07:27.755 } 00:07:27.755 } 00:07:27.755 ] 00:07:27.755 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1543802 00:07:27.755 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:27.755 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:27.755 Running I/O for 10 seconds... 00:07:28.694 Latency(us) 00:07:28.694 [2024-11-06T12:04:10.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.694 Nvme0n1 : 1.00 24952.00 97.47 0.00 0.00 0.00 0.00 0.00 00:07:28.694 [2024-11-06T12:04:10.596Z] =================================================================================================================== 00:07:28.694 [2024-11-06T12:04:10.596Z] Total : 24952.00 97.47 0.00 0.00 0.00 0.00 0.00 00:07:28.694 00:07:29.634 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:29.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.894 Nvme0n1 : 2.00 25144.50 98.22 0.00 0.00 0.00 0.00 0.00 00:07:29.894 [2024-11-06T12:04:11.796Z] =================================================================================================================== 00:07:29.894 [2024-11-06T12:04:11.796Z] Total : 25144.50 98.22 0.00 0.00 0.00 0.00 0.00 00:07:29.894 00:07:29.894 true 00:07:29.894 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:29.894 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:30.154 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:30.154 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:30.154 13:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1543802 00:07:30.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.725 Nvme0n1 : 3.00 25256.33 98.66 0.00 0.00 0.00 0.00 0.00 00:07:30.725 [2024-11-06T12:04:12.627Z] =================================================================================================================== 00:07:30.725 [2024-11-06T12:04:12.627Z] Total : 25256.33 98.66 0.00 0.00 0.00 0.00 0.00 00:07:30.725 00:07:32.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.107 Nvme0n1 : 4.00 25325.50 98.93 0.00 0.00 0.00 0.00 0.00 00:07:32.107 [2024-11-06T12:04:14.009Z] 
=================================================================================================================== 00:07:32.107 [2024-11-06T12:04:14.009Z] Total : 25325.50 98.93 0.00 0.00 0.00 0.00 0.00 00:07:32.107 00:07:32.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.680 Nvme0n1 : 5.00 25375.40 99.12 0.00 0.00 0.00 0.00 0.00 00:07:32.680 [2024-11-06T12:04:14.582Z] =================================================================================================================== 00:07:32.680 [2024-11-06T12:04:14.582Z] Total : 25375.40 99.12 0.00 0.00 0.00 0.00 0.00 00:07:32.680 00:07:34.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.064 Nvme0n1 : 6.00 25411.33 99.26 0.00 0.00 0.00 0.00 0.00 00:07:34.064 [2024-11-06T12:04:15.966Z] =================================================================================================================== 00:07:34.064 [2024-11-06T12:04:15.966Z] Total : 25411.33 99.26 0.00 0.00 0.00 0.00 0.00 00:07:34.064 00:07:35.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.003 Nvme0n1 : 7.00 25437.57 99.37 0.00 0.00 0.00 0.00 0.00 00:07:35.003 [2024-11-06T12:04:16.905Z] =================================================================================================================== 00:07:35.003 [2024-11-06T12:04:16.905Z] Total : 25437.57 99.37 0.00 0.00 0.00 0.00 0.00 00:07:35.003 00:07:35.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.943 Nvme0n1 : 8.00 25457.00 99.44 0.00 0.00 0.00 0.00 0.00 00:07:35.943 [2024-11-06T12:04:17.845Z] =================================================================================================================== 00:07:35.943 [2024-11-06T12:04:17.845Z] Total : 25457.00 99.44 0.00 0.00 0.00 0.00 0.00 00:07:35.943 00:07:36.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.884 Nvme0n1 : 9.00 25471.44 99.50 0.00 0.00 0.00 0.00 0.00 00:07:36.884 [2024-11-06T12:04:18.786Z] =================================================================================================================== 00:07:36.884 [2024-11-06T12:04:18.786Z] Total : 25471.44 99.50 0.00 0.00 0.00 0.00 0.00 00:07:36.884 00:07:37.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.824 Nvme0n1 : 10.00 25483.40 99.54 0.00 0.00 0.00 0.00 0.00 00:07:37.824 [2024-11-06T12:04:19.726Z] =================================================================================================================== 00:07:37.824 [2024-11-06T12:04:19.726Z] Total : 25483.40 99.54 0.00 0.00 0.00 0.00 0.00 00:07:37.824 00:07:37.824 00:07:37.824 Latency(us) 00:07:37.824 [2024-11-06T12:04:19.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.824 Nvme0n1 : 10.00 25484.61 99.55 0.00 0.00 5019.71 2976.43 15182.51 00:07:37.824 [2024-11-06T12:04:19.726Z] =================================================================================================================== 00:07:37.824 [2024-11-06T12:04:19.726Z] Total : 25484.61 99.55 0.00 0.00 5019.71 2976.43 15182.51 00:07:37.824 { 00:07:37.824 "results": [ 00:07:37.824 { 00:07:37.824 "job": "Nvme0n1", 00:07:37.824 "core_mask": "0x2", 00:07:37.824 "workload": "randwrite", 00:07:37.824 "status": "finished", 00:07:37.824 "queue_depth": 128, 00:07:37.824 "io_size": 4096, 00:07:37.824 
"runtime": 10.004547, 00:07:37.824 "iops": 25484.612146856824, 00:07:37.824 "mibps": 99.54926619865947, 00:07:37.824 "io_failed": 0, 00:07:37.824 "io_timeout": 0, 00:07:37.824 "avg_latency_us": 5019.711414459148, 00:07:37.824 "min_latency_us": 2976.4266666666667, 00:07:37.824 "max_latency_us": 15182.506666666666 00:07:37.824 } 00:07:37.824 ], 00:07:37.824 "core_count": 1 00:07:37.824 } 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1543705 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1543705 ']' 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1543705 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1543705 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1543705' 00:07:37.824 killing process with pid 1543705 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1543705 00:07:37.824 Received shutdown signal, test time was about 10.000000 seconds 00:07:37.824 00:07:37.824 Latency(us) 00:07:37.824 [2024-11-06T12:04:19.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.824 [2024-11-06T12:04:19.726Z] =================================================================================================================== 00:07:37.824 [2024-11-06T12:04:19.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:37.824 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1543705 00:07:38.084 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.084 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:38.344 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:38.344 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:38.605 13:04:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1539872 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1539872 00:07:38.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1539872 Killed "${NVMF_APP[@]}" "$@" 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1545912 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1545912 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1545912 ']' 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.605 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.605 [2024-11-06 13:04:20.459270] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:07:38.605 [2024-11-06 13:04:20.459331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.865 [2024-11-06 13:04:20.550626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.865 [2024-11-06 13:04:20.581190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.865 [2024-11-06 13:04:20.581219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.865 [2024-11-06 13:04:20.581225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.865 [2024-11-06 13:04:20.581230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:38.865 [2024-11-06 13:04:20.581234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.865 [2024-11-06 13:04:20.581678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.435 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.435 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:39.435 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.436 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.436 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:39.436 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.436 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.696 [2024-11-06 13:04:21.439613] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:39.696 [2024-11-06 13:04:21.439685] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:39.696 [2024-11-06 13:04:21.439706] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:39.696 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.956 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf435fd4-ef9a-4066-b9d6-da228271c6b4 -t 2000 00:07:39.956 [ 00:07:39.956 { 00:07:39.956 "name": "cf435fd4-ef9a-4066-b9d6-da228271c6b4", 00:07:39.956 "aliases": [ 00:07:39.956 "lvs/lvol" 00:07:39.956 ], 00:07:39.956 "product_name": "Logical Volume", 00:07:39.956 "block_size": 4096, 00:07:39.956 "num_blocks": 38912, 00:07:39.956 "uuid": "cf435fd4-ef9a-4066-b9d6-da228271c6b4", 00:07:39.956 "assigned_rate_limits": { 00:07:39.956 "rw_ios_per_sec": 0, 00:07:39.956 "rw_mbytes_per_sec": 0, 
00:07:39.956 "r_mbytes_per_sec": 0, 00:07:39.956 "w_mbytes_per_sec": 0 00:07:39.956 }, 00:07:39.956 "claimed": false, 00:07:39.956 "zoned": false, 00:07:39.956 "supported_io_types": { 00:07:39.957 "read": true, 00:07:39.957 "write": true, 00:07:39.957 "unmap": true, 00:07:39.957 "flush": false, 00:07:39.957 "reset": true, 00:07:39.957 "nvme_admin": false, 00:07:39.957 "nvme_io": false, 00:07:39.957 "nvme_io_md": false, 00:07:39.957 "write_zeroes": true, 00:07:39.957 "zcopy": false, 00:07:39.957 "get_zone_info": false, 00:07:39.957 "zone_management": false, 00:07:39.957 "zone_append": false, 00:07:39.957 "compare": false, 00:07:39.957 "compare_and_write": false, 00:07:39.957 "abort": false, 00:07:39.957 "seek_hole": true, 00:07:39.957 "seek_data": true, 00:07:39.957 "copy": false, 00:07:39.957 "nvme_iov_md": false 00:07:39.957 }, 00:07:39.957 "driver_specific": { 00:07:39.957 "lvol": { 00:07:39.957 "lvol_store_uuid": "34bc6c28-628f-4140-a642-9750aae60562", 00:07:39.957 "base_bdev": "aio_bdev", 00:07:39.957 "thin_provision": false, 00:07:39.957 "num_allocated_clusters": 38, 00:07:39.957 "snapshot": false, 00:07:39.957 "clone": false, 00:07:39.957 "esnap_clone": false 00:07:39.957 } 00:07:39.957 } 00:07:39.957 } 00:07:39.957 ] 00:07:39.957 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:39.957 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:39.957 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:40.217 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:40.217 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:40.217 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.478 [2024-11-06 13:04:22.268223] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.478 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:40.739 request: 00:07:40.739 { 00:07:40.739 "uuid": "34bc6c28-628f-4140-a642-9750aae60562", 00:07:40.739 "method": "bdev_lvol_get_lvstores", 00:07:40.739 "req_id": 1 00:07:40.739 } 00:07:40.739 Got JSON-RPC error response 00:07:40.739 response: 00:07:40.739 { 00:07:40.739 "code": -19, 00:07:40.739 "message": "No such device" 00:07:40.739 } 00:07:40.739 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:40.739 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.739 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.739 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.739 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.999 aio_bdev 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:40.999 13:04:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.999 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf435fd4-ef9a-4066-b9d6-da228271c6b4 -t 2000 00:07:41.260 [ 00:07:41.260 { 00:07:41.260 "name": "cf435fd4-ef9a-4066-b9d6-da228271c6b4", 00:07:41.260 "aliases": [ 00:07:41.260 "lvs/lvol" 00:07:41.260 ], 00:07:41.260 "product_name": "Logical Volume", 00:07:41.260 "block_size": 4096, 00:07:41.260 "num_blocks": 38912, 00:07:41.260 "uuid": "cf435fd4-ef9a-4066-b9d6-da228271c6b4", 00:07:41.260 "assigned_rate_limits": { 00:07:41.260 "rw_ios_per_sec": 0, 00:07:41.260 "rw_mbytes_per_sec": 0, 00:07:41.260 "r_mbytes_per_sec": 0, 00:07:41.260 "w_mbytes_per_sec": 0 00:07:41.260 }, 00:07:41.260 "claimed": false, 00:07:41.260 "zoned": false, 00:07:41.260 "supported_io_types": { 00:07:41.260 "read": true, 00:07:41.260 "write": true, 00:07:41.260 "unmap": true, 00:07:41.260 "flush": false, 00:07:41.260 "reset": true, 00:07:41.260 "nvme_admin": false, 00:07:41.260 "nvme_io": false, 00:07:41.260 "nvme_io_md": false, 00:07:41.260 "write_zeroes": true, 00:07:41.260 "zcopy": false, 00:07:41.260 "get_zone_info": false, 00:07:41.260 "zone_management": false, 00:07:41.260 "zone_append": false, 00:07:41.260 "compare": false, 00:07:41.260 "compare_and_write": false, 00:07:41.260 "abort": false, 00:07:41.260 "seek_hole": true, 00:07:41.260 "seek_data": true, 00:07:41.260 "copy": false, 00:07:41.260 "nvme_iov_md": false 00:07:41.260 }, 00:07:41.260 "driver_specific": { 00:07:41.260 "lvol": { 00:07:41.260 "lvol_store_uuid": "34bc6c28-628f-4140-a642-9750aae60562", 00:07:41.260 "base_bdev": "aio_bdev", 00:07:41.260 "thin_provision": false, 00:07:41.260 "num_allocated_clusters": 38, 00:07:41.260 "snapshot": false, 00:07:41.260 "clone": false, 00:07:41.260 "esnap_clone": false 00:07:41.260 } 00:07:41.260 } 00:07:41.260 } 00:07:41.260 ] 00:07:41.260 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:41.260 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:41.260 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.260 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.260 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:41.260 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:41.520 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:41.520 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf435fd4-ef9a-4066-b9d6-da228271c6b4 00:07:41.780 13:04:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34bc6c28-628f-4140-a642-9750aae60562 00:07:41.780 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.040 00:07:42.040 real 0m16.751s 00:07:42.040 user 0m44.233s 00:07:42.040 sys 0m2.962s 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.040 ************************************ 00:07:42.040 END TEST lvs_grow_dirty 00:07:42.040 ************************************ 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:42.040 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:42.040 nvmf_trace.0 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.301 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.301 rmmod nvme_tcp 00:07:42.301 rmmod nvme_fabrics 00:07:42.301 rmmod nvme_keyring 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:42.301 
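[Annotation] Teardown above mirrors setup in reverse; nvmftestfini then archives the shared-memory trace file and unloads the kernel NVMe/TCP modules (root is required for module removal). In outline:

    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"
    $RPC bdev_aio_delete aio_bdev
    rm -f "$AIO_FILE"
    tar -C /dev/shm/ -cvzf "$SPDK/../output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics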
13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1545912 ']' 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1545912 ']' 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1545912' 00:07:42.301 killing process with pid 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1545912 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.301 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.562 13:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.474 00:07:44.474 real 0m44.184s 00:07:44.474 user 1m6.215s 00:07:44.474 sys 0m10.509s 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.474 ************************************ 00:07:44.474 END TEST nvmf_lvs_grow 00:07:44.474 ************************************ 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.474 ************************************ 00:07:44.474 START TEST nvmf_bdev_io_wait 00:07:44.474 ************************************ 00:07:44.474 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:44.736 * Looking for test storage... 00:07:44.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.736 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:44.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.736 --rc genhtml_branch_coverage=1 00:07:44.736 --rc genhtml_function_coverage=1 00:07:44.736 --rc genhtml_legend=1 00:07:44.736 --rc geninfo_all_blocks=1 00:07:44.736 --rc geninfo_unexecuted_blocks=1 00:07:44.737 00:07:44.737 ' 00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:44.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.737 --rc genhtml_branch_coverage=1 00:07:44.737 --rc genhtml_function_coverage=1 00:07:44.737 --rc genhtml_legend=1 00:07:44.737 --rc geninfo_all_blocks=1 00:07:44.737 --rc geninfo_unexecuted_blocks=1 00:07:44.737 00:07:44.737 ' 00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:44.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.737 --rc genhtml_branch_coverage=1 00:07:44.737 --rc genhtml_function_coverage=1 00:07:44.737 --rc genhtml_legend=1 00:07:44.737 --rc geninfo_all_blocks=1 00:07:44.737 --rc geninfo_unexecuted_blocks=1 00:07:44.737 00:07:44.737 ' 00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:44.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.737 --rc genhtml_branch_coverage=1 00:07:44.737 --rc genhtml_function_coverage=1 00:07:44.737 --rc genhtml_legend=1 00:07:44.737 --rc geninfo_all_blocks=1 00:07:44.737 --rc geninfo_unexecuted_blocks=1 00:07:44.737 00:07:44.737 ' 00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.737 13:04:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:44.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:07:44.737 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:07:52.883 Found 0000:31:00.0 (0x8086 - 0x159b)
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:07:52.883 Found 0000:31:00.1 (0x8086 - 0x159b)
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:07:52.883 Found net devices under 0000:31:00.0: cvl_0_0
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:52.883 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:07:52.884 Found net devices under 0000:31:00.1: cvl_0_1
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:52.884 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:52.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:52.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms
00:07:52.884 
00:07:52.884 --- 10.0.0.2 ping statistics ---
00:07:52.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.884 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:52.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:52.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms
00:07:52.884 
00:07:52.884 --- 10.0.0.1 ping statistics ---
00:07:52.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.884 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1550978
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1550978
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1550978 ']'
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:52.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:52.884 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:52.884 [2024-11-06 13:04:34.310975] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:07:52.884 [2024-11-06 13:04:34.311042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:52.884 [2024-11-06 13:04:34.412275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:52.884 [2024-11-06 13:04:34.467027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:52.884 [2024-11-06 13:04:34.467082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:52.884 [2024-11-06 13:04:34.467092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:52.884 [2024-11-06 13:04:34.467100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:52.884 [2024-11-06 13:04:34.467107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:52.884 [2024-11-06 13:04:34.469233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.884 [2024-11-06 13:04:34.469392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:52.884 [2024-11-06 13:04:34.469553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:52.884 [2024-11-06 13:04:34.469553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 [2024-11-06 13:04:35.260877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 Malloc0
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:53.458 [2024-11-06 13:04:35.326282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1551293
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1551295
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:53.458 {
00:07:53.458 "params": {
00:07:53.458 "name": "Nvme$subsystem", 00:07:53.458 "trtype": "$TEST_TRANSPORT", 00:07:53.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.458 "adrfam": "ipv4", 00:07:53.458 "trsvcid": "$NVMF_PORT", 00:07:53.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.458 "hdgst": ${hdgst:-false}, 00:07:53.458 "ddgst": ${ddgst:-false} 00:07:53.458 }, 00:07:53.458 "method": "bdev_nvme_attach_controller" 00:07:53.458 } 00:07:53.458 EOF 00:07:53.458 )") 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1551297 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1551300 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.458 { 00:07:53.458 "params": { 00:07:53.458 "name": "Nvme$subsystem", 00:07:53.458 "trtype": "$TEST_TRANSPORT", 00:07:53.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.458 "adrfam": "ipv4", 00:07:53.458 "trsvcid": "$NVMF_PORT", 00:07:53.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.458 "hdgst": ${hdgst:-false}, 00:07:53.458 "ddgst": ${ddgst:-false} 00:07:53.458 }, 00:07:53.458 "method": "bdev_nvme_attach_controller" 00:07:53.458 } 00:07:53.458 EOF 00:07:53.458 )") 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.458 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.458 { 00:07:53.458 "params": { 00:07:53.458 "name": "Nvme$subsystem", 00:07:53.458 "trtype": "$TEST_TRANSPORT", 00:07:53.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.458 "adrfam": "ipv4", 00:07:53.458 "trsvcid": "$NVMF_PORT", 00:07:53.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.458 "hdgst": ${hdgst:-false}, 
00:07:53.458 "ddgst": ${ddgst:-false} 00:07:53.458 }, 00:07:53.458 "method": "bdev_nvme_attach_controller" 00:07:53.458 } 00:07:53.459 EOF 00:07:53.459 )") 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.459 { 00:07:53.459 "params": { 00:07:53.459 "name": "Nvme$subsystem", 00:07:53.459 "trtype": "$TEST_TRANSPORT", 00:07:53.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.459 "adrfam": "ipv4", 00:07:53.459 "trsvcid": "$NVMF_PORT", 00:07:53.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.459 "hdgst": ${hdgst:-false}, 00:07:53.459 "ddgst": ${ddgst:-false} 00:07:53.459 }, 00:07:53.459 "method": "bdev_nvme_attach_controller" 00:07:53.459 } 00:07:53.459 EOF 00:07:53.459 )") 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1551293 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.459 "params": { 00:07:53.459 "name": "Nvme1", 00:07:53.459 "trtype": "tcp", 00:07:53.459 "traddr": "10.0.0.2", 00:07:53.459 "adrfam": "ipv4", 00:07:53.459 "trsvcid": "4420", 00:07:53.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.459 "hdgst": false, 00:07:53.459 "ddgst": false 00:07:53.459 }, 00:07:53.459 "method": "bdev_nvme_attach_controller" 00:07:53.459 }' 00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:53.459 "params": {
00:07:53.459 "name": "Nvme1",
00:07:53.459 "trtype": "tcp",
00:07:53.459 "traddr": "10.0.0.2",
00:07:53.459 "adrfam": "ipv4",
00:07:53.459 "trsvcid": "4420",
00:07:53.459 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:53.459 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:53.459 "hdgst": false,
00:07:53.459 "ddgst": false
00:07:53.459 },
00:07:53.459 "method": "bdev_nvme_attach_controller"
00:07:53.459 }'
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:53.459 "params": {
00:07:53.459 "name": "Nvme1",
00:07:53.459 "trtype": "tcp",
00:07:53.459 "traddr": "10.0.0.2",
00:07:53.459 "adrfam": "ipv4",
00:07:53.459 "trsvcid": "4420",
00:07:53.459 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:53.459 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:53.459 "hdgst": false,
00:07:53.459 "ddgst": false
00:07:53.459 },
00:07:53.459 "method": "bdev_nvme_attach_controller"
00:07:53.459 }'
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:07:53.459 13:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:07:53.459 "params": {
00:07:53.459 "name": "Nvme1",
00:07:53.459 "trtype": "tcp",
00:07:53.459 "traddr": "10.0.0.2",
00:07:53.459 "adrfam": "ipv4",
00:07:53.459 "trsvcid": "4420",
00:07:53.459 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:53.459 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:53.459 "hdgst": false,
00:07:53.459 "ddgst": false
00:07:53.459 },
00:07:53.459 "method": "bdev_nvme_attach_controller"
00:07:53.459 }'
00:07:53.720 [2024-11-06 13:04:35.385452] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:07:53.720 [2024-11-06 13:04:35.385505] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:53.720 [2024-11-06 13:04:35.387575] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:07:53.720 [2024-11-06 13:04:35.387601] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:07:53.720 [2024-11-06 13:04:35.387637] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:53.720 [2024-11-06 13:04:35.387679] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:53.720 [2024-11-06 13:04:35.389142] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:07:53.720 [2024-11-06 13:04:35.389215] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:07:53.720 [2024-11-06 13:04:35.578073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.720 [2024-11-06 13:04:35.618381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:07:53.983 [2024-11-06 13:04:35.690614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.983 [2024-11-06 13:04:35.731826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:07:53.983 [2024-11-06 13:04:35.740117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.983 [2024-11-06 13:04:35.778151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:07:53.983 [2024-11-06 13:04:35.833054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.983 [2024-11-06 13:04:35.876532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:54.245 Running I/O for 1 seconds...
00:07:54.245 Running I/O for 1 seconds...
00:07:54.245 Running I/O for 1 seconds...
00:07:54.506 Running I/O for 1 seconds...
00:07:55.079 7702.00 IOPS, 30.09 MiB/s
00:07:55.079 Latency(us)
00:07:55.079 [2024-11-06T12:04:36.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.079 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:55.079 Nvme1n1 : 1.02 7701.07 30.08 0.00 0.00 16457.83 7536.64 23811.41
00:07:55.079 [2024-11-06T12:04:36.981Z] ===================================================================================================================
00:07:55.079 [2024-11-06T12:04:36.981Z] Total : 7701.07 30.08 0.00 0.00 16457.83 7536.64 23811.41
00:07:55.340 7594.00 IOPS, 29.66 MiB/s
00:07:55.340 Latency(us)
00:07:55.340 [2024-11-06T12:04:37.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.340 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:55.340 Nvme1n1 : 1.01 7703.42 30.09 0.00 0.00 16565.19 4478.29 32768.00
00:07:55.340 [2024-11-06T12:04:37.242Z] ===================================================================================================================
00:07:55.340 [2024-11-06T12:04:37.242Z] Total : 7703.42 30.09 0.00 0.00 16565.19 4478.29 32768.00
00:07:55.340 187800.00 IOPS, 733.59 MiB/s
00:07:55.340 Latency(us)
00:07:55.340 [2024-11-06T12:04:37.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.340 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:55.340 Nvme1n1 : 1.00 187377.03 731.94 0.00 0.00 679.04 302.08 2239.15
00:07:55.340 [2024-11-06T12:04:37.242Z] ===================================================================================================================
00:07:55.340 [2024-11-06T12:04:37.242Z] Total : 187377.03 731.94 0.00 0.00 679.04 302.08 2239.15
00:07:55.340 10748.00 IOPS, 41.98 MiB/s
00:07:55.340 Latency(us)
00:07:55.340 [2024-11-06T12:04:37.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.340 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:55.340 Nvme1n1 : 1.01 10820.55 42.27 0.00 0.00 11789.31 5434.03 23046.83
00:07:55.340 [2024-11-06T12:04:37.242Z] ===================================================================================================================
00:07:55.340 [2024-11-06T12:04:37.242Z] Total : 10820.55 42.27 0.00 0.00 11789.31 5434.03 23046.83
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1551295
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1551297
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1551300
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1550978 ']'
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1550978
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1550978 ']'
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1550978
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1550978
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1550978'
00:07:55.602 killing process with pid 1550978
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1550978
00:07:55.602 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1550978
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:55.864 13:04:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:57.780 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:58.040 
00:07:58.040 real 0m13.317s
00:07:58.040 user 0m20.177s
00:07:58.040 sys 0m7.526s
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:58.040 ************************************
00:07:58.040 END TEST nvmf_bdev_io_wait
00:07:58.040 ************************************
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:58.040 ************************************
00:07:58.040 START TEST nvmf_queue_depth
00:07:58.040 ************************************
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:07:58.040 * Looking for test storage...
00:07:58.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version
00:07:58.040 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:58.301 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:58.301 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.302 --rc genhtml_branch_coverage=1
00:07:58.302 --rc genhtml_function_coverage=1
00:07:58.302 --rc genhtml_legend=1
00:07:58.302 --rc geninfo_all_blocks=1
00:07:58.302 --rc geninfo_unexecuted_blocks=1
00:07:58.302 
00:07:58.302 '
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.302 --rc genhtml_branch_coverage=1
00:07:58.302 --rc genhtml_function_coverage=1
00:07:58.302 --rc genhtml_legend=1
00:07:58.302 --rc geninfo_all_blocks=1
00:07:58.302 --rc geninfo_unexecuted_blocks=1
00:07:58.302 
00:07:58.302 '
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.302 --rc genhtml_branch_coverage=1
00:07:58.302 --rc genhtml_function_coverage=1
00:07:58.302 --rc genhtml_legend=1
00:07:58.302 --rc geninfo_all_blocks=1
00:07:58.302 --rc geninfo_unexecuted_blocks=1
00:07:58.302 
00:07:58.302 '
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:58.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:58.302 --rc genhtml_branch_coverage=1
00:07:58.302 --rc genhtml_function_coverage=1
00:07:58.302 --rc genhtml_legend=1
00:07:58.302 --rc geninfo_all_blocks=1
00:07:58.302 --rc geninfo_unexecuted_blocks=1
00:07:58.302 
00:07:58.302 '
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:58.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:58.302 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:07:58.303 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:06.605 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:08:06.606 Found 0000:31:00.0 (0x8086 - 0x159b)
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth --
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:06.606 Found net devices under 0000:31:00.0: cvl_0_0 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:06.606 Found net devices under 0000:31:00.1: cvl_0_1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms 00:08:06.606 00:08:06.606 --- 10.0.0.2 ping statistics --- 00:08:06.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.606 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:08:06.606 00:08:06.606 --- 10.0.0.1 ping statistics --- 00:08:06.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.606 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1556044 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1556044 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1556044 ']' 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.606 13:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.606 [2024-11-06 13:04:47.649696] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
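The trace above is nvmftestinit from test/nvmf/common.sh wiring up the physical-NIC TCP topology: both E810 ports (cvl_0_0 and cvl_0_1) are flushed, the target port is moved into its own network namespace, each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420 on the initiator interface, and one ping in each direction confirms connectivity before nvmf_tgt (pid 1556044) is started inside the namespace on core mask 0x2. A minimal sketch of the equivalent manual setup, assuming the two ice ports have already been renamed cvl_0_0 and cvl_0_1 by the CI's setup scripts:

  # Target side lives in its own netns, so initiator/target traffic
  # actually crosses the link between the two E810 ports.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps 10.0.0.1, the target namespace gets 10.0.0.2.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open NVMe/TCP port 4420; the comment tag lets nvmftestfini strip
  # exactly this rule again during teardown.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Sanity checks in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With both pings succeeding, is_hw=yes and nvmftestinit returns 0; everything after this point talks to 10.0.0.2:4420 across the physical link.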
00:08:06.606 [2024-11-06 13:04:47.649771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.606 [2024-11-06 13:04:47.752336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.606 [2024-11-06 13:04:47.802246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.607 [2024-11-06 13:04:47.802298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.607 [2024-11-06 13:04:47.802306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.607 [2024-11-06 13:04:47.802313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.607 [2024-11-06 13:04:47.802319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.607 [2024-11-06 13:04:47.803147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.607 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 [2024-11-06 13:04:48.509017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 Malloc0 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.869 13:04:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 [2024-11-06 13:04:48.570247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1556231 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1556231 /var/tmp/bdevperf.sock 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1556231 ']' 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.869 13:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 [2024-11-06 13:04:48.627529] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
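The RPC sequence above is the provisioning half of target/queue_depth.sh: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and a listener on 10.0.0.2:4420; bdevperf then starts as a second SPDK app on its own RPC socket (/var/tmp/bdevperf.sock) so the test can attach the remote namespace and drive it at queue depth 1024. The same flow issued directly with scripts/rpc.py (rpc_cmd in the trace is a wrapper around it) looks roughly like this; paths are relative to the SPDK source root and the RPC variable is shorthand introduced here:

  RPC=./scripts/rpc.py

  # Target-side provisioning, flags as in the trace
  # (-o toggles the TCP C2H success optimization, -u sets the I/O unit size).
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevperf is its own SPDK app: -z waits for an RPC-driven start,
  # -q 1024 is the queue depth under test, -o 4096 the I/O size in bytes.
  # (The real script waits for the socket with waitforlisten before issuing RPCs.)
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # Attach the remote namespace over NVMe/TCP, then kick off the run.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Running bdevperf behind -z keeps provisioning and measurement separate: perform_tests only fires once the controller is attached, so the 10-second window measures steady-state verify traffic at the full queue depth.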
00:08:06.869 [2024-11-06 13:04:48.627593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556231 ] 00:08:06.869 [2024-11-06 13:04:48.720366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.130 [2024-11-06 13:04:48.774521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.703 NVMe0n1 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.703 13:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.964 Running I/O for 10 seconds... 00:08:09.850 8374.00 IOPS, 32.71 MiB/s [2024-11-06T12:04:52.694Z] 9786.00 IOPS, 38.23 MiB/s [2024-11-06T12:04:54.084Z] 10528.00 IOPS, 41.12 MiB/s [2024-11-06T12:04:54.654Z] 10981.00 IOPS, 42.89 MiB/s [2024-11-06T12:04:56.039Z] 11427.80 IOPS, 44.64 MiB/s [2024-11-06T12:04:56.980Z] 11741.17 IOPS, 45.86 MiB/s [2024-11-06T12:04:57.922Z] 11994.14 IOPS, 46.85 MiB/s [2024-11-06T12:04:58.863Z] 12125.00 IOPS, 47.36 MiB/s [2024-11-06T12:04:59.805Z] 12269.78 IOPS, 47.93 MiB/s [2024-11-06T12:04:59.805Z] 12391.20 IOPS, 48.40 MiB/s 00:08:17.903 Latency(us) 00:08:17.903 [2024-11-06T12:04:59.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.903 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:17.903 Verification LBA range: start 0x0 length 0x4000 00:08:17.903 NVMe0n1 : 10.06 12410.00 48.48 0.00 0.00 82256.06 24903.68 77769.39 00:08:17.903 [2024-11-06T12:04:59.805Z] =================================================================================================================== 00:08:17.903 [2024-11-06T12:04:59.805Z] Total : 12410.00 48.48 0.00 0.00 82256.06 24903.68 77769.39 00:08:17.903 { 00:08:17.903 "results": [ 00:08:17.903 { 00:08:17.903 "job": "NVMe0n1", 00:08:17.903 "core_mask": "0x1", 00:08:17.903 "workload": "verify", 00:08:17.903 "status": "finished", 00:08:17.903 "verify_range": { 00:08:17.903 "start": 0, 00:08:17.903 "length": 16384 00:08:17.903 }, 00:08:17.903 "queue_depth": 1024, 00:08:17.903 "io_size": 4096, 00:08:17.903 "runtime": 10.064947, 00:08:17.903 "iops": 12410.000768011992, 00:08:17.903 "mibps": 48.476565500046846, 00:08:17.903 "io_failed": 0, 00:08:17.903 "io_timeout": 0, 00:08:17.903 "avg_latency_us": 82256.0596177392, 00:08:17.903 "min_latency_us": 24903.68, 00:08:17.903 "max_latency_us": 77769.38666666667 00:08:17.903 } 00:08:17.903 ], 00:08:17.903 "core_count": 1 00:08:17.903 } 00:08:17.903 13:04:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1556231 00:08:17.903 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1556231 ']' 00:08:17.903 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1556231 00:08:17.903 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:17.903 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.903 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1556231 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1556231' 00:08:18.165 killing process with pid 1556231 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1556231 00:08:18.165 Received shutdown signal, test time was about 10.000000 seconds 00:08:18.165 00:08:18.165 Latency(us) 00:08:18.165 [2024-11-06T12:05:00.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.165 [2024-11-06T12:05:00.067Z] =================================================================================================================== 00:08:18.165 [2024-11-06T12:05:00.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1556231 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.165 rmmod nvme_tcp 00:08:18.165 rmmod nvme_fabrics 00:08:18.165 rmmod nvme_keyring 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1556044 ']' 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1556044 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1556044 ']' 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 1556044 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.165 13:04:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1556044 00:08:18.165 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:18.165 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:18.165 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1556044' 00:08:18.165 killing process with pid 1556044 00:08:18.165 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1556044 00:08:18.165 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1556044 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.426 13:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.339 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.339 00:08:20.339 real 0m22.467s 00:08:20.339 user 0m25.596s 00:08:20.339 sys 0m7.123s 00:08:20.339 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.339 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.339 ************************************ 00:08:20.339 END TEST nvmf_queue_depth 00:08:20.339 ************************************ 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.600 ************************************ 00:08:20.600 START TEST nvmf_target_multipath 00:08:20.600 ************************************ 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:20.600 * Looking for test storage... 00:08:20.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.600 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.863 --rc genhtml_branch_coverage=1 00:08:20.863 --rc genhtml_function_coverage=1 00:08:20.863 --rc genhtml_legend=1 00:08:20.863 --rc geninfo_all_blocks=1 00:08:20.863 --rc geninfo_unexecuted_blocks=1 00:08:20.863 00:08:20.863 ' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.863 --rc genhtml_branch_coverage=1 00:08:20.863 --rc genhtml_function_coverage=1 00:08:20.863 --rc genhtml_legend=1 00:08:20.863 --rc geninfo_all_blocks=1 00:08:20.863 --rc geninfo_unexecuted_blocks=1 00:08:20.863 00:08:20.863 ' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.863 --rc genhtml_branch_coverage=1 00:08:20.863 --rc genhtml_function_coverage=1 00:08:20.863 --rc genhtml_legend=1 00:08:20.863 --rc geninfo_all_blocks=1 00:08:20.863 --rc geninfo_unexecuted_blocks=1 00:08:20.863 00:08:20.863 ' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.863 --rc genhtml_branch_coverage=1 00:08:20.863 --rc genhtml_function_coverage=1 00:08:20.863 --rc genhtml_legend=1 00:08:20.863 --rc geninfo_all_blocks=1 00:08:20.863 --rc geninfo_unexecuted_blocks=1 00:08:20.863 00:08:20.863 ' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.863 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.864 13:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.007 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:29.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:29.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:29.008 Found net devices under 0000:31:00.0: cvl_0_0 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.008 13:05:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:29.008 Found net devices under 0000:31:00.1: cvl_0_1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:29.008 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:08:29.008 00:08:29.008 --- 10.0.0.2 ping statistics --- 00:08:29.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.008 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:29.008 00:08:29.008 --- 10.0.0.1 ping statistics --- 00:08:29.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.008 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:29.008 only one NIC for nvmf test 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
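The nvmf_tcp_init sequence traced above reduces to the following standalone sketch (the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing are taken from the trace itself; this is a reconstruction for readability, not output of the run, and it needs root plus real NICs):

    # target NIC moves into its own namespace; initiator NIC stays in the default one
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port toward the initiator-side interface, tagged so the
    # teardown path can find the rule again
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The SPDK_NVMF comment on the iptables rule is what lets the cleanup step a few records below (iptr) strip only the rules this test added, via iptables-save | grep -v SPDK_NVMF | iptables-restore.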
00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.008 rmmod nvme_tcp 00:08:29.008 rmmod nvme_fabrics 00:08:29.008 rmmod nvme_keyring 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:29.008 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.009 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.926 00:08:30.926 real 0m10.047s 00:08:30.926 user 0m2.141s 00:08:30.926 sys 0m5.838s 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 ************************************ 00:08:30.926 END TEST nvmf_target_multipath 00:08:30.926 ************************************ 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.926 ************************************ 00:08:30.926 START TEST nvmf_zcopy 00:08:30.926 ************************************ 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:30.926 * Looking for test storage... 
00:08:30.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.926 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.927 --rc genhtml_branch_coverage=1 00:08:30.927 --rc genhtml_function_coverage=1 00:08:30.927 --rc genhtml_legend=1 00:08:30.927 --rc geninfo_all_blocks=1 00:08:30.927 --rc geninfo_unexecuted_blocks=1 00:08:30.927 00:08:30.927 ' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.927 --rc genhtml_branch_coverage=1 00:08:30.927 --rc genhtml_function_coverage=1 00:08:30.927 --rc genhtml_legend=1 00:08:30.927 --rc geninfo_all_blocks=1 00:08:30.927 --rc geninfo_unexecuted_blocks=1 00:08:30.927 00:08:30.927 ' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.927 --rc genhtml_branch_coverage=1 00:08:30.927 --rc genhtml_function_coverage=1 00:08:30.927 --rc genhtml_legend=1 00:08:30.927 --rc geninfo_all_blocks=1 00:08:30.927 --rc geninfo_unexecuted_blocks=1 00:08:30.927 00:08:30.927 ' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.927 --rc genhtml_branch_coverage=1 00:08:30.927 --rc genhtml_function_coverage=1 00:08:30.927 --rc genhtml_legend=1 00:08:30.927 --rc geninfo_all_blocks=1 00:08:30.927 --rc geninfo_unexecuted_blocks=1 00:08:30.927 00:08:30.927 ' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.927 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.928 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.928 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.928 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.073 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:39.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:39.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:39.074 Found net devices under 0000:31:00.0: cvl_0_0 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:39.074 Found net devices under 0000:31:00.1: cvl_0_1 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.074 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:08:39.074 00:08:39.074 --- 10.0.0.2 ping statistics --- 00:08:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.074 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:08:39.074 00:08:39.074 --- 10.0.0.1 ping statistics --- 00:08:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.074 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1567168 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1567168 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1567168 ']' 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.074 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.075 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.075 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.075 [2024-11-06 13:05:20.399382] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:08:39.075 [2024-11-06 13:05:20.399458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.075 [2024-11-06 13:05:20.497505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.075 [2024-11-06 13:05:20.547404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.075 [2024-11-06 13:05:20.547454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.075 [2024-11-06 13:05:20.547462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.075 [2024-11-06 13:05:20.547469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.075 [2024-11-06 13:05:20.547476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.075 [2024-11-06 13:05:20.548267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.336 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.336 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:39.336 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.336 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.336 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 [2024-11-06 13:05:21.250126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 [2024-11-06 13:05:21.274396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 malloc0 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.597 { 00:08:39.597 "params": { 00:08:39.597 "name": "Nvme$subsystem", 00:08:39.597 "trtype": "$TEST_TRANSPORT", 00:08:39.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.597 "adrfam": "ipv4", 00:08:39.597 "trsvcid": "$NVMF_PORT", 00:08:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.597 "hdgst": ${hdgst:-false}, 00:08:39.597 "ddgst": ${ddgst:-false} 00:08:39.597 }, 00:08:39.597 "method": "bdev_nvme_attach_controller" 00:08:39.597 } 00:08:39.597 EOF 00:08:39.597 )") 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
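Collapsing the rpc_cmd trace above, the zcopy test brings up the target with this RPC sequence, restated here as explicit scripts/rpc.py invocations on the assumption that rpc_cmd is the usual thin wrapper over that script talking to the default /var/tmp/spdk.sock socket:

    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), serial SPDK00000000000001, max 10 namespaces (-m 10)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # data listener plus discovery listener on the namespaced target address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON printed immediately below is the other half of the handshake: gen_nvmf_target_json emits a single bdev_nvme_attach_controller config for that subsystem, which bdevperf consumes over the --json /dev/fd/62 pipe.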
00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:39.597 13:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.597 "params": { 00:08:39.597 "name": "Nvme1", 00:08:39.597 "trtype": "tcp", 00:08:39.597 "traddr": "10.0.0.2", 00:08:39.597 "adrfam": "ipv4", 00:08:39.597 "trsvcid": "4420", 00:08:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:39.597 "hdgst": false, 00:08:39.597 "ddgst": false 00:08:39.597 }, 00:08:39.597 "method": "bdev_nvme_attach_controller" 00:08:39.597 }' 00:08:39.597 [2024-11-06 13:05:21.374566] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:08:39.597 [2024-11-06 13:05:21.374629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567205 ] 00:08:39.597 [2024-11-06 13:05:21.458743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.858 [2024-11-06 13:05:21.511574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.119 Running I/O for 10 seconds... 00:08:42.004 7240.00 IOPS, 56.56 MiB/s [2024-11-06T12:05:25.289Z] 8467.00 IOPS, 66.15 MiB/s [2024-11-06T12:05:25.861Z] 8893.00 IOPS, 69.48 MiB/s [2024-11-06T12:05:27.244Z] 9108.50 IOPS, 71.16 MiB/s [2024-11-06T12:05:28.183Z] 9234.60 IOPS, 72.15 MiB/s [2024-11-06T12:05:29.125Z] 9322.33 IOPS, 72.83 MiB/s [2024-11-06T12:05:30.066Z] 9381.86 IOPS, 73.30 MiB/s [2024-11-06T12:05:31.009Z] 9428.75 IOPS, 73.66 MiB/s [2024-11-06T12:05:31.950Z] 9464.44 IOPS, 73.94 MiB/s [2024-11-06T12:05:31.950Z] 9493.40 IOPS, 74.17 MiB/s 00:08:50.048 Latency(us) 00:08:50.048 [2024-11-06T12:05:31.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:50.049 Verification LBA range: start 0x0 length 0x1000 00:08:50.049 Nvme1n1 : 10.01 9495.85 74.19 0.00 0.00 13432.34 1092.27 28398.93 00:08:50.049 [2024-11-06T12:05:31.951Z] =================================================================================================================== 00:08:50.049 [2024-11-06T12:05:31.951Z] Total : 9495.85 74.19 0.00 0.00 13432.34 1092.27 28398.93 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1569377 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:50.310 { 00:08:50.310 "params": { 00:08:50.310 "name": 
"Nvme$subsystem", 00:08:50.310 "trtype": "$TEST_TRANSPORT", 00:08:50.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.310 "adrfam": "ipv4", 00:08:50.310 "trsvcid": "$NVMF_PORT", 00:08:50.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.310 "hdgst": ${hdgst:-false}, 00:08:50.310 "ddgst": ${ddgst:-false} 00:08:50.310 }, 00:08:50.310 "method": "bdev_nvme_attach_controller" 00:08:50.310 } 00:08:50.310 EOF 00:08:50.310 )") 00:08:50.310 [2024-11-06 13:05:31.982156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:31.982185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:50.310 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:50.310 "params": { 00:08:50.310 "name": "Nvme1", 00:08:50.310 "trtype": "tcp", 00:08:50.310 "traddr": "10.0.0.2", 00:08:50.310 "adrfam": "ipv4", 00:08:50.310 "trsvcid": "4420", 00:08:50.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.310 "hdgst": false, 00:08:50.310 "ddgst": false 00:08:50.310 }, 00:08:50.310 "method": "bdev_nvme_attach_controller" 00:08:50.310 }' 00:08:50.310 [2024-11-06 13:05:31.994155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:31.994166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.006184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.006194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.018216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.018224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.026270] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:08:50.310 [2024-11-06 13:05:32.026318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569377 ] 00:08:50.310 [2024-11-06 13:05:32.030247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.030255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.042276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.042285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.054307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.054315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.066337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.066346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.078368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.078377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.090400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.090408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.102432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.102441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.109476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.310 [2024-11-06 13:05:32.114465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.114473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.126494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.126503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.138526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.138541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.139158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.310 [2024-11-06 13:05:32.150562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.150572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.162593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.310 [2024-11-06 13:05:32.162606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.310 [2024-11-06 13:05:32.174619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
[... same error pair repeated every 12-13 ms, 13:05:32.174 through 13:05:32.427 ...]
00:08:50.572 Running I/O for 5 seconds...
[... same error pair repeated every ~13 ms, 13:05:32.439 through 13:05:33.439 ...]
00:08:51.615 19043.00 IOPS, 148.77 MiB/s [2024-11-06T12:05:33.517Z]
[... same error pair at 13:05:33.452, 13:05:33.465 and 13:05:33.477 ...]
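A quick sanity check on the bdevperf progress line above: 148.77 MiB/s across 19043 IOPS works out to 148.77 × 1,048,576 / 19,043 ≈ 8,192 bytes per I/O, i.e. the workload appears to be issuing 8 KiB transfers. That block size is inferred from the arithmetic; the log itself does not state it.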
[... same error pair repeated every ~13 ms, 13:05:33.490 through 13:05:34.434 ...]
00:08:52.662 19126.00 IOPS, 149.42 MiB/s [2024-11-06T12:05:34.564Z]
[... same error pair repeated every ~13 ms, 13:05:34.446 through 13:05:35.434 ...]
00:08:53.707 19125.67 IOPS, 149.42 MiB/s [2024-11-06T12:05:35.609Z]
[... same error pair repeated every ~13 ms, 13:05:35.447 through 13:05:35.758 ...]
00:08:53.968 [2024-11-06 13:05:35.772330]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.772345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.785740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.785760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.798799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.798815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.811823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.811842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.825258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.825273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.837807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.837823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.850399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.850414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.968 [2024-11-06 13:05:35.863658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.968 [2024-11-06 13:05:35.863672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.876399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.876414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.889647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.889662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.902749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.902764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.916337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.916352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.929957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.929972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.943328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.943343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.956059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.956074] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.968656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.968671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.981149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.981164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:35.993673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:35.993688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.006271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.006286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.019018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.019033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.031912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.031928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.045095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.045110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.058597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.058616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.071817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.071832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.085377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.085393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.098554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.098569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.110990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.111005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.229 [2024-11-06 13:05:36.124053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.229 [2024-11-06 13:05:36.124068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.137530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.137546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.151444] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.151460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.164923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.164938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.178417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.178433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.191823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.191839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.204713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.490 [2024-11-06 13:05:36.204729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.490 [2024-11-06 13:05:36.217282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.217297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.230461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.230476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.243351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.243366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.256546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.256561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.269020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.269034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.282262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.282277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.294912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.294927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.308189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.308205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.321772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.321787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.334515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.334530] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.347153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.347168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.360454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.360469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.373460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.373475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.491 [2024-11-06 13:05:36.385813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.491 [2024-11-06 13:05:36.385828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.398907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.398923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.412501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.412516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.425923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.425938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.438501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.438517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 19154.25 IOPS, 149.64 MiB/s [2024-11-06T12:05:36.654Z] [2024-11-06 13:05:36.451819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.451835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.465407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.465422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.478353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.478368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.491823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.491838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.505370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.505385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.518483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.518499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 
13:05:36.531864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.531880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.544456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.544471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.557822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.557837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.571599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.571614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.584278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.584293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.597636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.597651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.611165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.611180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.624285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.624300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.637914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.637929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.752 [2024-11-06 13:05:36.651214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.752 [2024-11-06 13:05:36.651230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.664581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.664597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.678052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.678067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.690950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.690966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.703210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.703227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.716920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.716936] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.730310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.730325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.743185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.743201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.756237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.756252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.769639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.769655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.782487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.782503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.794930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.794953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.807840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.807856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.820910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.820925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.834147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.834163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.847018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.847034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.860154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.860169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.873345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.873361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.886183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.886199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.899604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.899619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.014 [2024-11-06 13:05:36.912711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.014 [2024-11-06 13:05:36.912727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.926130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.926146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.939344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.939359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.952877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.952893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.965909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.965925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.978514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.978529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:36.991667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:36.991682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.004146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.004161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.016853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.016869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.029548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.029564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.043544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.043563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.056198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.056214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.068986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.069002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.082146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.082162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.095620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.095635] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.109188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.109204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.123205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.123220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.136867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.136882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.149276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.149292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.162215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.162230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.276 [2024-11-06 13:05:37.175104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.276 [2024-11-06 13:05:37.175119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.536 [2024-11-06 13:05:37.188013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.188030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.201449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.201465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.214876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.214893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.228533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.228549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.242246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.242262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.255125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.255140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.267638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.267653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.281204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.537 [2024-11-06 13:05:37.281219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.537 [2024-11-06 13:05:37.294734] 
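The paired errors above are the expected negative path of this zcopy test: while the randrw job runs, the script repeatedly tries to attach another namespace with NSID 1 to nqn.2016-06.io.spdk:cnode1, spdk_nvmf_subsystem_add_ns_ext() rejects each attempt because that NSID is already claimed, and nvmf_rpc_ns_paused() reports the failure. A single such attempt could look like the rpc.py call below; this is a hedged sketch (the bdev name malloc0 and default RPC socket are assumptions based on the surrounding log, not the exact loop in zcopy.sh):

    # one illustrative failing attempt: NSID 1 already belongs to another bdev on cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # expected target-side log: "Requested NSID 1 already in use" / "Unable to add namespace"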
00:08:55.798 19163.80 IOPS, 149.72 MiB/s [2024-11-06T12:05:37.700Z]
00:08:55.798 [2024-11-06 13:05:37.450080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:55.798 [2024-11-06 13:05:37.450095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:55.798
00:08:55.798 Latency(us)
00:08:55.798 [2024-11-06T12:05:37.700Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:08:55.798 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:55.798 Nvme1n1            :       5.01  19166.89  149.74    0.00  0.00  6672.69  3003.73  16602.45
00:08:55.798 [2024-11-06T12:05:37.700Z] ===================================================================================================================
00:08:55.798 [2024-11-06T12:05:37.700Z] Total              :             19166.89  149.74    0.00  0.00  6672.69  3003.73  16602.45
00:08:55.798 [2024-11-06 13:05:37.459624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:55.798 [2024-11-06 13:05:37.459639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:55.798 [2024-11-06 13:05:37.555859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:55.798 [2024-11-06 13:05:37.555868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:55.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1569377) - No such process
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1569377
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:55.798 delay0 13:05:37
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.798 13:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:55.798 [2024-11-06 13:05:37.681091] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:04.023 Initializing NVMe Controllers 00:09:04.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:04.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:04.023 Initialization complete. Launching workers. 00:09:04.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 231, failed: 35410 00:09:04.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 35521, failed to submit 120 00:09:04.023 success 35445, unsuccessful 76, failed 0 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.023 rmmod nvme_tcp 00:09:04.023 rmmod nvme_fabrics 00:09:04.023 rmmod nvme_keyring 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1567168 ']' 00:09:04.023 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1567168 00:09:04.024 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1567168 ']' 00:09:04.024 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1567168 00:09:04.024 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:04.024 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:04.024 13:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- 
# ps --no-headers -o comm= 1567168 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1567168' 00:09:04.024 killing process with pid 1567168 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1567168 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1567168 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.024 13:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.408 00:09:05.408 real 0m34.775s 00:09:05.408 user 0m45.715s 00:09:05.408 sys 0m11.953s 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.408 ************************************ 00:09:05.408 END TEST nvmf_zcopy 00:09:05.408 ************************************ 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.408 ************************************ 00:09:05.408 START TEST nvmf_nmic 00:09:05.408 ************************************ 00:09:05.408 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.670 * Looking for test storage... 
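The nvmftestfini trace above reduces to a short cleanup sequence: unload the kernel NVMe-oF initiator modules, drop the SPDK test iptables rules, and (in the lines that follow) flush the test address from the interface. A hedged summary of those steps, assuming root privileges and the interface name cvl_0_1 seen in this log:

    modprobe -v -r nvme-tcp                               # rmmod nvme_tcp
    modprobe -v -r nvme-fabrics                           # rmmod nvme_fabrics, nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except the SPDK_NVMF test rules
    ip -4 addr flush cvl_0_1                              # remove the 10.0.0.x test address from the NIC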
00:09:05.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.670 --rc genhtml_branch_coverage=1 00:09:05.670 --rc genhtml_function_coverage=1 00:09:05.670 --rc genhtml_legend=1 00:09:05.670 --rc geninfo_all_blocks=1 00:09:05.670 --rc geninfo_unexecuted_blocks=1 00:09:05.670 00:09:05.670 ' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.670 --rc genhtml_branch_coverage=1 00:09:05.670 --rc genhtml_function_coverage=1 00:09:05.670 --rc genhtml_legend=1 00:09:05.670 --rc geninfo_all_blocks=1 00:09:05.670 --rc geninfo_unexecuted_blocks=1 00:09:05.670 00:09:05.670 ' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.670 --rc genhtml_branch_coverage=1 00:09:05.670 --rc genhtml_function_coverage=1 00:09:05.670 --rc genhtml_legend=1 00:09:05.670 --rc geninfo_all_blocks=1 00:09:05.670 --rc geninfo_unexecuted_blocks=1 00:09:05.670 00:09:05.670 ' 00:09:05.670 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.670 --rc genhtml_branch_coverage=1 00:09:05.670 --rc genhtml_function_coverage=1 00:09:05.670 --rc genhtml_legend=1 00:09:05.670 --rc geninfo_all_blocks=1 00:09:05.670 --rc geninfo_unexecuted_blocks=1 00:09:05.670 00:09:05.670 ' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
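The cmp_versions trace above implements a plain numeric, field-by-field comparison: both version strings are split on '.', '-' and ':', then compared component-wise until one side wins. Below is a simplified bash sketch of that logic, an assumption-laden reconstruction rather than the exact helper in scripts/common.sh (which also supports other operators such as '>' and 'ge'):

    lt() {   # return 0 if version $1 is lower than version $2
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2"

Here lt 1.15 2 succeeds on the first field (1 < 2), which is why the branch- and function-coverage LCOV_OPTS flags above get exported.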
00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:05.671 
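[The "[: : integer expression expected" message above comes from bash's test builtin, not from the test suite: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and -eq requires integers on both sides, so an unset/empty variable trips it. The script continues because the failed test simply takes the false branch. A sketch of the failure and the usual guard; the variable name is illustrative:]

    [ '' -eq 1 ]        # reproduces: [: : integer expression expected

    flag=${flag:-0}     # substitute a numeric default before comparing
    if [ "$flag" -eq 1 ]; then
        echo "flag enabled"
    fi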
13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.671 13:05:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:13.811 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.811 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:13.812 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.812 13:05:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:13.812 Found net devices under 0000:31:00.0: cvl_0_0 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:13.812 Found net devices under 0000:31:00.1: cvl_0_1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:09:13.812 00:09:13.812 --- 10.0.0.2 ping statistics --- 00:09:13.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.812 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:09:13.812 00:09:13.812 --- 10.0.0.1 ping statistics --- 00:09:13.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.812 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.812 13:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1576269 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1576269 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1576269 ']' 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.812 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.812 [2024-11-06 13:05:55.089281] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
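[At this point nvmftestinit has finished building the two-port topology the rest of the test rides on: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target, the second (cvl_0_1) stays in the root namespace as the initiator, connectivity is verified in both directions, and nvmf_tgt is launched inside the namespace. A condensed replay of the commands traced above, with the binary path shortened:]

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF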
00:09:13.812 [2024-11-06 13:05:55.089347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.812 [2024-11-06 13:05:55.190191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.812 [2024-11-06 13:05:55.243969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.812 [2024-11-06 13:05:55.244021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.812 [2024-11-06 13:05:55.244029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.812 [2024-11-06 13:05:55.244037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.812 [2024-11-06 13:05:55.244043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.812 [2024-11-06 13:05:55.246416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.812 [2024-11-06 13:05:55.246616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.812 [2024-11-06 13:05:55.246798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.812 [2024-11-06 13:05:55.246801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.073 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.074 [2024-11-06 13:05:55.946488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.074 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.336 Malloc0 00:09:14.336 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.336 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.336 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.336 13:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.336 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.336 [2024-11-06 13:05:56.024603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:14.337 test case1: single bdev can't be used in multiple subsystems 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.337 [2024-11-06 13:05:56.060538] bdev.c:8189:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:14.337 [2024-11-06 13:05:56.060558] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:14.337 [2024-11-06 13:05:56.060566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.337 request: 00:09:14.337 { 00:09:14.337 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:14.337 "namespace": { 00:09:14.337 "bdev_name": "Malloc0", 00:09:14.337 "no_auto_visible": false 
00:09:14.337 }, 00:09:14.337 "method": "nvmf_subsystem_add_ns", 00:09:14.337 "req_id": 1 00:09:14.337 } 00:09:14.337 Got JSON-RPC error response 00:09:14.337 response: 00:09:14.337 { 00:09:14.337 "code": -32602, 00:09:14.337 "message": "Invalid parameters" 00:09:14.337 } 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:14.337 Adding namespace failed - expected result. 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:14.337 test case2: host connect to nvmf target in multiple paths 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.337 [2024-11-06 13:05:56.072672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.337 13:05:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.249 13:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:17.632 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.632 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:17.632 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.632 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:17.632 13:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:19.548 13:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
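[Two results land in the trace above. Test case1 tried to add Malloc0 to a second subsystem and got the expected refusal: the first nvmf_subsystem_add_ns took an exclusive_write claim on the bdev, so the second open fails with error=-1 and the RPC returns -32602. Test case2 then connected the host to cnode1 over both listeners (4420 and 4421) and polled lsblk until a device carrying the subsystem serial appeared. A minimal equivalent of that connect-and-wait sequence; the --hostnqn/--hostid flags from NVME_HOST are omitted for brevity:]

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

    # waitforserial: retry until a block device shows the target's serial
    for ((i = 0; i < 15; i++)); do
        if (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); then
            break
        fi
        sleep 2
    done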
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:19.548 [global] 00:09:19.548 thread=1 00:09:19.548 invalidate=1 00:09:19.548 rw=write 00:09:19.548 time_based=1 00:09:19.548 runtime=1 00:09:19.548 ioengine=libaio 00:09:19.548 direct=1 00:09:19.548 bs=4096 00:09:19.548 iodepth=1 00:09:19.548 norandommap=0 00:09:19.548 numjobs=1 00:09:19.548 00:09:19.548 verify_dump=1 00:09:19.548 verify_backlog=512 00:09:19.548 verify_state_save=0 00:09:19.548 do_verify=1 00:09:19.548 verify=crc32c-intel 00:09:19.548 [job0] 00:09:19.548 filename=/dev/nvme0n1 00:09:19.548 Could not set queue depth (nvme0n1) 00:09:19.808 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.808 fio-3.35 00:09:19.808 Starting 1 thread 00:09:21.192 00:09:21.192 job0: (groupid=0, jobs=1): err= 0: pid=1577907: Wed Nov 6 13:06:02 2024 00:09:21.192 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:09:21.192 slat (nsec): min=25836, max=27093, avg=26192.68, stdev=387.10 00:09:21.192 clat (usec): min=674, max=42020, avg=39283.10, stdev=9361.34 00:09:21.192 lat (usec): min=701, max=42046, avg=39309.29, stdev=9361.15 00:09:21.192 clat percentiles (usec): 00:09:21.192 | 1.00th=[ 676], 5.00th=[ 676], 10.00th=[41157], 20.00th=[41157], 00:09:21.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:21.192 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:21.192 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:21.192 | 99.99th=[42206] 00:09:21.192 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:21.192 slat (nsec): min=9135, max=67780, avg=29765.59, stdev=10054.01 00:09:21.192 clat (usec): min=163, max=783, avg=529.60, stdev=108.69 00:09:21.192 lat (usec): min=173, max=817, avg=559.37, stdev=113.13 00:09:21.192 clat percentiles (usec): 00:09:21.192 | 1.00th=[ 255], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 441], 00:09:21.192 | 30.00th=[ 482], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 562], 00:09:21.192 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 709], 00:09:21.192 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 783], 99.95th=[ 783], 00:09:21.192 | 99.99th=[ 783] 00:09:21.192 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:21.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:21.192 lat (usec) : 250=0.94%, 500=35.40%, 750=59.32%, 1000=0.94% 00:09:21.192 lat (msec) : 50=3.39% 00:09:21.192 cpu : usr=1.54%, sys=1.35%, ctx=531, majf=0, minf=1 00:09:21.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.192 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.192 00:09:21.192 Run status group 0 (all jobs): 00:09:21.192 READ: bw=73.3KiB/s (75.0kB/s), 73.3KiB/s-73.3KiB/s (75.0kB/s-75.0kB/s), io=76.0KiB (77.8kB), run=1037-1037msec 00:09:21.192 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:09:21.192 00:09:21.192 Disk stats (read/write): 00:09:21.192 nvme0n1: ios=65/512, merge=0/0, ticks=645/219, in_queue=864, util=93.69% 00:09:21.192 13:06:02 
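[The fio-wrapper above generated the single-job write workload whose job file is echoed in the trace; writing it out by hand reproduces the run without the wrapper. The file name here is illustrative, and the device path is the namespace attached in the connect step. The small READ line in the summary comes from verify=crc32c-intel reading written data back for checksum verification, which is why a pure-write job still reports read bandwidth:]

    cat > nmic-job0.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic-job0.fio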
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.192 13:06:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.192 rmmod nvme_tcp 00:09:21.192 rmmod nvme_fabrics 00:09:21.192 rmmod nvme_keyring 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1576269 ']' 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1576269 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1576269 ']' 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1576269 00:09:21.192 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1576269 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1576269' 00:09:21.453 killing process with pid 1576269 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1576269 00:09:21.453 13:06:03 
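[Teardown mirrors setup: disconnect the host from cnode1 (which drops both paths at once, hence "disconnected 2 controller(s)"), pull the kernel initiator modules, and stop the target process recorded in nvmfpid at startup. The essential commands from the trace:]

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # 1576269 in this run
    wait "$nvmfpid"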
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1576269 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.453 13:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.998 00:09:23.998 real 0m18.053s 00:09:23.998 user 0m45.628s 00:09:23.998 sys 0m6.445s 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.998 ************************************ 00:09:23.998 END TEST nvmf_nmic 00:09:23.998 ************************************ 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.998 ************************************ 00:09:23.998 START TEST nvmf_fio_target 00:09:23.998 ************************************ 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.998 * Looking for test storage... 
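[The END/START banners and the real/user/sys times above come from the harness's run_test wrapper, which brackets every sub-test with banners and a time measurement. The real implementation lives in autotest_common.sh; this is a stripped-down equivalent of the pattern:]

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    run_test nvmf_fio_target ./test/nvmf/target/fio.sh --transport=tcp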
00:09:23.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:23.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.998 --rc genhtml_branch_coverage=1 00:09:23.998 --rc genhtml_function_coverage=1 00:09:23.998 --rc genhtml_legend=1 00:09:23.998 --rc geninfo_all_blocks=1 00:09:23.998 --rc geninfo_unexecuted_blocks=1 00:09:23.998 00:09:23.998 ' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:23.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.998 --rc genhtml_branch_coverage=1 00:09:23.998 --rc genhtml_function_coverage=1 00:09:23.998 --rc genhtml_legend=1 00:09:23.998 --rc geninfo_all_blocks=1 00:09:23.998 --rc geninfo_unexecuted_blocks=1 00:09:23.998 00:09:23.998 ' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:23.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.998 --rc genhtml_branch_coverage=1 00:09:23.998 --rc genhtml_function_coverage=1 00:09:23.998 --rc genhtml_legend=1 00:09:23.998 --rc geninfo_all_blocks=1 00:09:23.998 --rc geninfo_unexecuted_blocks=1 00:09:23.998 00:09:23.998 ' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:23.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.998 --rc genhtml_branch_coverage=1 00:09:23.998 --rc genhtml_function_coverage=1 00:09:23.998 --rc genhtml_legend=1 00:09:23.998 --rc geninfo_all_blocks=1 00:09:23.998 --rc geninfo_unexecuted_blocks=1 00:09:23.998 00:09:23.998 ' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.998 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.999 13:06:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.999 13:06:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.142 13:06:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:32.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:32.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.142 13:06:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:32.142 Found net devices under 0000:31:00.0: cvl_0_0 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:32.142 Found net devices under 0000:31:00.1: cvl_0_1 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.142 13:06:12 
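For context: the gather_supported_nvmf_pci_devs step above walks the PCI bus, matches NIC vendor:device IDs against its Intel (e810/x722) and Mellanox (mlx) tables, and resolves each hit to a kernel net interface through sysfs. A minimal standalone sketch of the same idea follows; the variable names and the restriction to the two E810 IDs seen in this run (0x1592, 0x159b) are illustrative, not the harness code itself:

#!/usr/bin/env bash
# List net interfaces backed by Intel E810-class NICs, the way the
# harness discovers cvl_0_0 / cvl_0_1 below.
for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor")   # e.g. 0x8086 (Intel)
    dev=$(<"$pci/device")   # e.g. 0x159b (E810-XXV)
    case "$ven:$dev" in
        0x8086:0x1592|0x8086:0x159b)
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
            done
            ;;
    esac
done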
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.142 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.143 13:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:09:32.143 00:09:32.143 --- 10.0.0.2 ping statistics --- 00:09:32.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.143 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:32.143 00:09:32.143 --- 10.0.0.1 ping statistics --- 00:09:32.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.143 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1582848 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1582848 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1582848 ']' 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.143 13:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.143 [2024-11-06 13:06:13.401805] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
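The nvmf_tcp_init sequence that just ran builds the test topology out of the two discovered E810 ports: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms connectivity. Condensed from the commands above into a standalone sketch (the addr-flush step and iptables comment option are dropped):

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back

Keeping the target in its own namespace means the 10.0.0.1 <-> 10.0.0.2 traffic has to cross the NIC pair rather than being short-circuited through a single host stack, which is presumably why the phy (physical NIC) runs, and the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt" invocation below, are set up this way.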
00:09:32.143 [2024-11-06 13:06:13.401874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.143 [2024-11-06 13:06:13.502902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.143 [2024-11-06 13:06:13.556213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.143 [2024-11-06 13:06:13.556264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.143 [2024-11-06 13:06:13.556273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.143 [2024-11-06 13:06:13.556280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.143 [2024-11-06 13:06:13.556287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.143 [2024-11-06 13:06:13.558323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.143 [2024-11-06 13:06:13.558447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.143 [2024-11-06 13:06:13.558606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.143 [2024-11-06 13:06:13.558607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.404 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.665 [2024-11-06 13:06:14.435883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.665 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.926 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:32.926 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.188 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:33.188 13:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.450 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:33.450 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.450 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:33.450 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:33.713 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:33.974 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:33.974 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.236 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:34.236 13:06:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.496 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:34.496 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:34.496 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:34.757 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:34.757 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.018 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:35.018 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:35.280 13:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.280 [2024-11-06 13:06:17.093211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.280 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:35.540 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:35.800 13:06:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.182 13:06:19 
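With the connect call above, target provisioning is complete: a TCP transport, malloc bdevs Malloc0 through Malloc6, a RAID0 over Malloc2/Malloc3 and a concat over Malloc4-6, all attached as namespaces of subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. Collapsed into one sketch; the ordering is condensed relative to the log, the rpc.py path is shortened, and the initiator's --hostnqn/--hostid flags are omitted:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512        # repeated to create Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is what the waitforserial step below polls for: four block devices reporting serial SPDKISFASTANDAWESOME.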
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:37.182 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:37.182 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.182 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:37.182 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:37.182 13:06:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:39.724 13:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.724 [global] 00:09:39.724 thread=1 00:09:39.724 invalidate=1 00:09:39.724 rw=write 00:09:39.724 time_based=1 00:09:39.724 runtime=1 00:09:39.724 ioengine=libaio 00:09:39.724 direct=1 00:09:39.724 bs=4096 00:09:39.724 iodepth=1 00:09:39.724 norandommap=0 00:09:39.724 numjobs=1 00:09:39.724 00:09:39.724 verify_dump=1 00:09:39.724 verify_backlog=512 00:09:39.724 verify_state_save=0 00:09:39.724 do_verify=1 00:09:39.724 verify=crc32c-intel 00:09:39.724 [job0] 00:09:39.724 filename=/dev/nvme0n1 00:09:39.724 [job1] 00:09:39.724 filename=/dev/nvme0n2 00:09:39.724 [job2] 00:09:39.724 filename=/dev/nvme0n3 00:09:39.724 [job3] 00:09:39.724 filename=/dev/nvme0n4 00:09:39.724 Could not set queue depth (nvme0n1) 00:09:39.724 Could not set queue depth (nvme0n2) 00:09:39.724 Could not set queue depth (nvme0n3) 00:09:39.724 Could not set queue depth (nvme0n4) 00:09:39.724 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.724 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.724 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.724 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.724 fio-3.35 00:09:39.724 Starting 4 threads 00:09:41.107 00:09:41.107 job0: (groupid=0, jobs=1): err= 0: pid=1584696: Wed Nov 6 13:06:22 2024 00:09:41.107 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:09:41.107 slat (nsec): min=24129, max=25414, avg=25090.50, stdev=298.92 00:09:41.107 clat (usec): min=41022, max=42065, avg=41924.39, stdev=209.86 00:09:41.107 lat (usec): min=41047, max=42090, avg=41949.48, stdev=209.81 00:09:41.107 clat percentiles (usec): 00:09:41.107 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 
20.00th=[41681], 00:09:41.107 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:41.107 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:41.107 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:41.107 | 99.99th=[42206] 00:09:41.107 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:41.107 slat (nsec): min=9357, max=84583, avg=18916.37, stdev=11288.33 00:09:41.107 clat (usec): min=92, max=4091, avg=201.16, stdev=195.12 00:09:41.107 lat (usec): min=104, max=4124, avg=220.08, stdev=199.40 00:09:41.107 clat percentiles (usec): 00:09:41.107 | 1.00th=[ 96], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 115], 00:09:41.107 | 30.00th=[ 117], 40.00th=[ 124], 50.00th=[ 151], 60.00th=[ 210], 00:09:41.107 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 338], 00:09:41.107 | 99.00th=[ 445], 99.50th=[ 529], 99.90th=[ 4080], 99.95th=[ 4080], 00:09:41.107 | 99.99th=[ 4080] 00:09:41.107 bw ( KiB/s): min= 4096, max= 4096, per=32.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.107 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.107 lat (usec) : 100=3.56%, 250=58.61%, 500=32.96%, 750=0.56% 00:09:41.107 lat (msec) : 10=0.19%, 50=4.12% 00:09:41.107 cpu : usr=0.58%, sys=0.77%, ctx=534, majf=0, minf=1 00:09:41.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.107 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.107 job1: (groupid=0, jobs=1): err= 0: pid=1584697: Wed Nov 6 13:06:22 2024 00:09:41.107 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:41.107 slat (nsec): min=7665, max=60209, avg=26575.02, stdev=3364.16 00:09:41.107 clat (usec): min=667, max=1363, avg=1058.42, stdev=124.04 00:09:41.107 lat (usec): min=694, max=1389, avg=1085.00, stdev=123.98 00:09:41.107 clat percentiles (usec): 00:09:41.107 | 1.00th=[ 750], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 947], 00:09:41.107 | 30.00th=[ 996], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1090], 00:09:41.107 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1221], 95.00th=[ 1270], 00:09:41.107 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1369], 99.95th=[ 1369], 00:09:41.107 | 99.99th=[ 1369] 00:09:41.107 write: IOPS=758, BW=3033KiB/s (3106kB/s)(3036KiB/1001msec); 0 zone resets 00:09:41.107 slat (nsec): min=8995, max=70004, avg=30318.69, stdev=9403.38 00:09:41.107 clat (usec): min=112, max=923, avg=542.21, stdev=143.20 00:09:41.107 lat (usec): min=123, max=961, avg=572.53, stdev=147.60 00:09:41.107 clat percentiles (usec): 00:09:41.107 | 1.00th=[ 151], 5.00th=[ 289], 10.00th=[ 351], 20.00th=[ 437], 00:09:41.107 | 30.00th=[ 474], 40.00th=[ 523], 50.00th=[ 553], 60.00th=[ 586], 00:09:41.107 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 766], 00:09:41.107 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 922], 00:09:41.107 | 99.99th=[ 922] 00:09:41.107 bw ( KiB/s): min= 4096, max= 4096, per=32.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.107 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.107 lat (usec) : 250=2.12%, 500=18.88%, 750=35.64%, 1000=15.42% 00:09:41.107 lat (msec) : 2=27.93% 00:09:41.107 cpu : usr=2.20%, sys=5.30%, ctx=1271, majf=0, minf=1 00:09:41.107 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.107 issued rwts: total=512,759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.107 job2: (groupid=0, jobs=1): err= 0: pid=1584699: Wed Nov 6 13:06:22 2024 00:09:41.107 read: IOPS=669, BW=2677KiB/s (2742kB/s)(2680KiB/1001msec) 00:09:41.107 slat (nsec): min=7445, max=63016, avg=24754.49, stdev=8224.27 00:09:41.107 clat (usec): min=445, max=965, avg=745.86, stdev=89.79 00:09:41.107 lat (usec): min=472, max=993, avg=770.61, stdev=90.57 00:09:41.107 clat percentiles (usec): 00:09:41.107 | 1.00th=[ 506], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 668], 00:09:41.107 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 758], 60.00th=[ 775], 00:09:41.108 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:09:41.108 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963], 00:09:41.108 | 99.99th=[ 963] 00:09:41.108 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:41.108 slat (nsec): min=10139, max=54701, avg=31009.72, stdev=10470.62 00:09:41.108 clat (usec): min=123, max=2930, avg=429.57, stdev=120.95 00:09:41.108 lat (usec): min=135, max=2966, avg=460.58, stdev=124.14 00:09:41.108 clat percentiles (usec): 00:09:41.108 | 1.00th=[ 206], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 338], 00:09:41.108 | 30.00th=[ 379], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 461], 00:09:41.108 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 562], 00:09:41.108 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 824], 99.95th=[ 2933], 00:09:41.108 | 99.99th=[ 2933] 00:09:41.108 bw ( KiB/s): min= 4096, max= 4096, per=32.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.108 lat (usec) : 250=1.00%, 500=47.70%, 750=29.40%, 1000=21.84% 00:09:41.108 lat (msec) : 4=0.06% 00:09:41.108 cpu : usr=2.30%, sys=5.10%, ctx=1695, majf=0, minf=1 00:09:41.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.108 issued rwts: total=670,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.108 job3: (groupid=0, jobs=1): err= 0: pid=1584700: Wed Nov 6 13:06:22 2024 00:09:41.108 read: IOPS=643, BW=2573KiB/s (2635kB/s)(2576KiB/1001msec) 00:09:41.108 slat (nsec): min=7217, max=58617, avg=24720.26, stdev=6013.15 00:09:41.108 clat (usec): min=374, max=1078, avg=798.20, stdev=113.09 00:09:41.108 lat (usec): min=383, max=1104, avg=822.92, stdev=114.48 00:09:41.108 clat percentiles (usec): 00:09:41.108 | 1.00th=[ 433], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 701], 00:09:41.108 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 807], 60.00th=[ 840], 00:09:41.108 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[ 930], 95.00th=[ 955], 00:09:41.108 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:09:41.108 | 99.99th=[ 1074] 00:09:41.108 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:41.108 slat (nsec): min=10239, max=79930, avg=32586.85, stdev=8021.87 00:09:41.108 clat (usec): min=110, max=762, avg=414.66, 
stdev=114.39 00:09:41.108 lat (usec): min=122, max=796, avg=447.25, stdev=115.72 00:09:41.108 clat percentiles (usec): 00:09:41.108 | 1.00th=[ 198], 5.00th=[ 255], 10.00th=[ 297], 20.00th=[ 318], 00:09:41.108 | 30.00th=[ 334], 40.00th=[ 359], 50.00th=[ 400], 60.00th=[ 441], 00:09:41.108 | 70.00th=[ 469], 80.00th=[ 523], 90.00th=[ 578], 95.00th=[ 611], 00:09:41.108 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 734], 99.95th=[ 766], 00:09:41.108 | 99.99th=[ 766] 00:09:41.108 bw ( KiB/s): min= 4096, max= 4096, per=32.03%, avg=4096.00, stdev= 0.00, samples=1 00:09:41.108 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:41.108 lat (usec) : 250=2.94%, 500=44.66%, 750=25.48%, 1000=26.38% 00:09:41.108 lat (msec) : 2=0.54% 00:09:41.108 cpu : usr=2.10%, sys=5.40%, ctx=1670, majf=0, minf=1 00:09:41.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.108 issued rwts: total=644,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.108 00:09:41.108 Run status group 0 (all jobs): 00:09:41.108 READ: bw=7121KiB/s (7292kB/s), 84.8KiB/s-2677KiB/s (86.8kB/s-2742kB/s), io=7392KiB (7569kB), run=1001-1038msec 00:09:41.108 WRITE: bw=12.5MiB/s (13.1MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=13.0MiB (13.6MB), run=1001-1038msec 00:09:41.108 00:09:41.108 Disk stats (read/write): 00:09:41.108 nvme0n1: ios=71/512, merge=0/0, ticks=940/96, in_queue=1036, util=85.49% 00:09:41.108 nvme0n2: ios=491/512, merge=0/0, ticks=546/202, in_queue=748, util=87.03% 00:09:41.108 nvme0n3: ios=569/810, merge=0/0, ticks=785/329, in_queue=1114, util=100.00% 00:09:41.108 nvme0n4: ios=534/792, merge=0/0, ticks=1279/311, in_queue=1590, util=99.89% 00:09:41.108 13:06:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:41.108 [global] 00:09:41.108 thread=1 00:09:41.108 invalidate=1 00:09:41.108 rw=randwrite 00:09:41.108 time_based=1 00:09:41.108 runtime=1 00:09:41.108 ioengine=libaio 00:09:41.108 direct=1 00:09:41.108 bs=4096 00:09:41.108 iodepth=1 00:09:41.108 norandommap=0 00:09:41.108 numjobs=1 00:09:41.108 00:09:41.108 verify_dump=1 00:09:41.108 verify_backlog=512 00:09:41.108 verify_state_save=0 00:09:41.108 do_verify=1 00:09:41.108 verify=crc32c-intel 00:09:41.108 [job0] 00:09:41.108 filename=/dev/nvme0n1 00:09:41.108 [job1] 00:09:41.108 filename=/dev/nvme0n2 00:09:41.108 [job2] 00:09:41.108 filename=/dev/nvme0n3 00:09:41.108 [job3] 00:09:41.108 filename=/dev/nvme0n4 00:09:41.108 Could not set queue depth (nvme0n1) 00:09:41.108 Could not set queue depth (nvme0n2) 00:09:41.108 Could not set queue depth (nvme0n3) 00:09:41.108 Could not set queue depth (nvme0n4) 00:09:41.368 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.368 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.368 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.368 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.368 fio-3.35 00:09:41.368 Starting 4 threads 00:09:42.752 
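This randwrite pass and the write pass before it use the same generated job file echoed above: four libaio jobs, one per namespace, direct I/O, 4 KiB blocks, queue depth 1, one second per run, with crc32c-intel data verification. Written out by hand it is equivalent to the following (the file name nvmf.fio is illustrative; the contents are reconstructed from the listing):

cat > nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf.fio

The repeated "Could not set queue depth" lines are fio warnings that it could not tune the devices' queue settings; the jobs still run to completion (err= 0 above).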
00:09:42.752 job0: (groupid=0, jobs=1): err= 0: pid=1585222: Wed Nov 6 13:06:24 2024 00:09:42.752 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:42.752 slat (nsec): min=7631, max=57179, avg=26196.59, stdev=3735.02 00:09:42.752 clat (usec): min=635, max=1239, avg=996.03, stdev=82.30 00:09:42.752 lat (usec): min=662, max=1265, avg=1022.23, stdev=82.62 00:09:42.752 clat percentiles (usec): 00:09:42.752 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 938], 00:09:42.752 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:09:42.752 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:09:42.752 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:42.752 | 99.99th=[ 1237] 00:09:42.752 write: IOPS=731, BW=2925KiB/s (2995kB/s)(2928KiB/1001msec); 0 zone resets 00:09:42.752 slat (nsec): min=8916, max=60171, avg=28560.81, stdev=9089.85 00:09:42.752 clat (usec): min=260, max=4107, avg=609.06, stdev=170.68 00:09:42.752 lat (usec): min=270, max=4118, avg=637.62, stdev=172.50 00:09:42.752 clat percentiles (usec): 00:09:42.752 | 1.00th=[ 326], 5.00th=[ 392], 10.00th=[ 449], 20.00th=[ 510], 00:09:42.752 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:42.752 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 766], 00:09:42.752 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 4113], 99.95th=[ 4113], 00:09:42.752 | 99.99th=[ 4113] 00:09:42.752 bw ( KiB/s): min= 4096, max= 4096, per=36.34%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.752 lat (usec) : 500=10.53%, 750=44.45%, 1000=22.11% 00:09:42.752 lat (msec) : 2=22.83%, 10=0.08% 00:09:42.752 cpu : usr=3.00%, sys=4.20%, ctx=1246, majf=0, minf=1 00:09:42.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 issued rwts: total=512,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.753 job1: (groupid=0, jobs=1): err= 0: pid=1585223: Wed Nov 6 13:06:24 2024 00:09:42.753 read: IOPS=562, BW=2250KiB/s (2304kB/s)(2252KiB/1001msec) 00:09:42.753 slat (nsec): min=6940, max=45083, avg=24970.85, stdev=6393.77 00:09:42.753 clat (usec): min=332, max=982, avg=739.92, stdev=113.90 00:09:42.753 lat (usec): min=359, max=1009, avg=764.90, stdev=114.72 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 457], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 635], 00:09:42.753 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 791], 00:09:42.753 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 898], 00:09:42.753 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 979], 99.95th=[ 979], 00:09:42.753 | 99.99th=[ 979] 00:09:42.753 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:42.753 slat (nsec): min=9598, max=67559, avg=32356.14, stdev=7573.24 00:09:42.753 clat (usec): min=113, max=754, avg=511.04, stdev=113.34 00:09:42.753 lat (usec): min=124, max=786, avg=543.40, stdev=115.78 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 255], 5.00th=[ 314], 10.00th=[ 363], 20.00th=[ 408], 00:09:42.753 | 30.00th=[ 453], 40.00th=[ 490], 50.00th=[ 515], 60.00th=[ 545], 00:09:42.753 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 676], 00:09:42.753 | 99.00th=[ 717], 
99.50th=[ 725], 99.90th=[ 750], 99.95th=[ 758], 00:09:42.753 | 99.99th=[ 758] 00:09:42.753 bw ( KiB/s): min= 4096, max= 4096, per=36.34%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.753 lat (usec) : 250=0.57%, 500=29.05%, 750=52.30%, 1000=18.08% 00:09:42.753 cpu : usr=3.20%, sys=4.10%, ctx=1588, majf=0, minf=1 00:09:42.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 issued rwts: total=563,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.753 job2: (groupid=0, jobs=1): err= 0: pid=1585224: Wed Nov 6 13:06:24 2024 00:09:42.753 read: IOPS=45, BW=184KiB/s (188kB/s)(188KiB/1022msec) 00:09:42.753 slat (nsec): min=9728, max=30678, avg=26381.81, stdev=2557.24 00:09:42.753 clat (usec): min=941, max=42882, avg=14213.42, stdev=19289.61 00:09:42.753 lat (usec): min=968, max=42909, avg=14239.81, stdev=19290.00 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 938], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:09:42.753 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1270], 00:09:42.753 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.753 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:42.753 | 99.99th=[42730] 00:09:42.753 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:42.753 slat (nsec): min=9257, max=68799, avg=30066.99, stdev=8945.31 00:09:42.753 clat (usec): min=309, max=1063, avg=650.12, stdev=125.95 00:09:42.753 lat (usec): min=319, max=1115, avg=680.19, stdev=129.65 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 367], 5.00th=[ 441], 10.00th=[ 482], 20.00th=[ 545], 00:09:42.753 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:09:42.753 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 865], 00:09:42.753 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057], 00:09:42.753 | 99.99th=[ 1057] 00:09:42.753 bw ( KiB/s): min= 4096, max= 4096, per=36.34%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.753 lat (usec) : 500=11.45%, 750=62.61%, 1000=17.53% 00:09:42.753 lat (msec) : 2=5.72%, 50=2.68% 00:09:42.753 cpu : usr=0.98%, sys=2.15%, ctx=560, majf=0, minf=1 00:09:42.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.753 job3: (groupid=0, jobs=1): err= 0: pid=1585225: Wed Nov 6 13:06:24 2024 00:09:42.753 read: IOPS=512, BW=2048KiB/s (2097kB/s)(2048KiB/1000msec) 00:09:42.753 slat (nsec): min=7691, max=47808, avg=27582.05, stdev=3558.89 00:09:42.753 clat (usec): min=693, max=1452, avg=1103.05, stdev=119.58 00:09:42.753 lat (usec): min=704, max=1479, avg=1130.64, stdev=119.98 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 783], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 996], 00:09:42.753 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 
1123], 60.00th=[ 1139], 00:09:42.753 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1287], 00:09:42.753 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1450], 99.95th=[ 1450], 00:09:42.753 | 99.99th=[ 1450] 00:09:42.753 write: IOPS=612, BW=2448KiB/s (2507kB/s)(2448KiB/1000msec); 0 zone resets 00:09:42.753 slat (nsec): min=9638, max=56637, avg=32592.13, stdev=8485.75 00:09:42.753 clat (usec): min=232, max=1085, avg=640.03, stdev=138.87 00:09:42.753 lat (usec): min=267, max=1120, avg=672.62, stdev=141.38 00:09:42.753 clat percentiles (usec): 00:09:42.753 | 1.00th=[ 330], 5.00th=[ 424], 10.00th=[ 461], 20.00th=[ 523], 00:09:42.753 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:09:42.753 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 881], 00:09:42.753 | 99.00th=[ 971], 99.50th=[ 1020], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:42.753 | 99.99th=[ 1090] 00:09:42.753 bw ( KiB/s): min= 4096, max= 4096, per=36.34%, avg=4096.00, stdev= 0.00, samples=1 00:09:42.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:42.753 lat (usec) : 250=0.09%, 500=8.81%, 750=34.79%, 1000=19.84% 00:09:42.753 lat (msec) : 2=36.48% 00:09:42.753 cpu : usr=2.00%, sys=5.00%, ctx=1126, majf=0, minf=1 00:09:42.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.753 issued rwts: total=512,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.753 00:09:42.753 Run status group 0 (all jobs): 00:09:42.753 READ: bw=6395KiB/s (6549kB/s), 184KiB/s-2250KiB/s (188kB/s-2304kB/s), io=6536KiB (6693kB), run=1000-1022msec 00:09:42.753 WRITE: bw=11.0MiB/s (11.5MB/s), 2004KiB/s-4092KiB/s (2052kB/s-4190kB/s), io=11.2MiB (11.8MB), run=1000-1022msec 00:09:42.753 00:09:42.753 Disk stats (read/write): 00:09:42.753 nvme0n1: ios=542/512, merge=0/0, ticks=506/257, in_queue=763, util=88.18% 00:09:42.753 nvme0n2: ios=543/803, merge=0/0, ticks=569/380, in_queue=949, util=96.84% 00:09:42.753 nvme0n3: ios=33/512, merge=0/0, ticks=678/265, in_queue=943, util=92.33% 00:09:42.753 nvme0n4: ios=456/512, merge=0/0, ticks=1352/250, in_queue=1602, util=97.56% 00:09:42.753 13:06:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:42.753 [global] 00:09:42.753 thread=1 00:09:42.753 invalidate=1 00:09:42.753 rw=write 00:09:42.753 time_based=1 00:09:42.753 runtime=1 00:09:42.753 ioengine=libaio 00:09:42.753 direct=1 00:09:42.753 bs=4096 00:09:42.753 iodepth=128 00:09:42.753 norandommap=0 00:09:42.753 numjobs=1 00:09:42.753 00:09:42.753 verify_dump=1 00:09:42.753 verify_backlog=512 00:09:42.753 verify_state_save=0 00:09:42.753 do_verify=1 00:09:42.753 verify=crc32c-intel 00:09:42.753 [job0] 00:09:42.753 filename=/dev/nvme0n1 00:09:42.753 [job1] 00:09:42.753 filename=/dev/nvme0n2 00:09:42.753 [job2] 00:09:42.753 filename=/dev/nvme0n3 00:09:42.753 [job3] 00:09:42.753 filename=/dev/nvme0n4 00:09:42.753 Could not set queue depth (nvme0n1) 00:09:42.753 Could not set queue depth (nvme0n2) 00:09:42.753 Could not set queue depth (nvme0n3) 00:09:42.753 Could not set queue depth (nvme0n4) 00:09:43.013 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:09:43.013 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.013 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.013 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.013 fio-3.35 00:09:43.013 Starting 4 threads 00:09:44.396 00:09:44.396 job0: (groupid=0, jobs=1): err= 0: pid=1585746: Wed Nov 6 13:06:26 2024 00:09:44.396 read: IOPS=5328, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1004msec) 00:09:44.396 slat (nsec): min=1004, max=13729k, avg=98431.84, stdev=676620.75 00:09:44.396 clat (usec): min=2404, max=73354, avg=12085.94, stdev=8975.00 00:09:44.396 lat (usec): min=3530, max=73361, avg=12184.38, stdev=9055.31 00:09:44.396 clat percentiles (usec): 00:09:44.396 | 1.00th=[ 4359], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6652], 00:09:44.396 | 30.00th=[ 7635], 40.00th=[ 8586], 50.00th=[10290], 60.00th=[11469], 00:09:44.396 | 70.00th=[12125], 80.00th=[14091], 90.00th=[17957], 95.00th=[23200], 00:09:44.396 | 99.00th=[56886], 99.50th=[61604], 99.90th=[69731], 99.95th=[72877], 00:09:44.396 | 99.99th=[72877] 00:09:44.396 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:09:44.396 slat (nsec): min=1711, max=7952.5k, avg=78364.81, stdev=485179.34 00:09:44.396 clat (usec): min=2269, max=73340, avg=11101.74, stdev=9675.07 00:09:44.396 lat (usec): min=2277, max=73352, avg=11180.10, stdev=9724.73 00:09:44.396 clat percentiles (usec): 00:09:44.396 | 1.00th=[ 3392], 5.00th=[ 3884], 10.00th=[ 4621], 20.00th=[ 5473], 00:09:44.396 | 30.00th=[ 5735], 40.00th=[ 6718], 50.00th=[ 8455], 60.00th=[10552], 00:09:44.396 | 70.00th=[11994], 80.00th=[12518], 90.00th=[17695], 95.00th=[33162], 00:09:44.396 | 99.00th=[53740], 99.50th=[58983], 99.90th=[63177], 99.95th=[63177], 00:09:44.396 | 99.99th=[72877] 00:09:44.396 bw ( KiB/s): min=21536, max=23520, per=27.38%, avg=22528.00, stdev=1402.90, samples=2 00:09:44.396 iops : min= 5384, max= 5880, avg=5632.00, stdev=350.72, samples=2 00:09:44.396 lat (msec) : 4=3.77%, 10=48.80%, 20=38.77%, 50=6.60%, 100=2.06% 00:09:44.396 cpu : usr=4.29%, sys=6.48%, ctx=408, majf=0, minf=1 00:09:44.396 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:44.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.396 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.396 issued rwts: total=5350,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.396 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.396 job1: (groupid=0, jobs=1): err= 0: pid=1585747: Wed Nov 6 13:06:26 2024 00:09:44.396 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:09:44.397 slat (nsec): min=1018, max=21201k, avg=112368.78, stdev=807942.58 00:09:44.397 clat (usec): min=3348, max=63528, avg=13161.34, stdev=8187.76 00:09:44.397 lat (usec): min=3355, max=63537, avg=13273.71, stdev=8260.38 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 4490], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 7963], 00:09:44.397 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[10683], 60.00th=[12649], 00:09:44.397 | 70.00th=[13960], 80.00th=[16450], 90.00th=[22414], 95.00th=[30278], 00:09:44.397 | 99.00th=[50070], 99.50th=[56886], 99.90th=[63701], 99.95th=[63701], 00:09:44.397 | 99.99th=[63701] 00:09:44.397 write: IOPS=4497, BW=17.6MiB/s (18.4MB/s)(17.7MiB/1010msec); 0 zone resets 00:09:44.397 slat (nsec): 
min=1692, max=11059k, avg=113320.62, stdev=584512.98 00:09:44.397 clat (usec): min=1197, max=67453, avg=16329.19, stdev=13551.19 00:09:44.397 lat (usec): min=1246, max=67462, avg=16442.51, stdev=13635.74 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 2802], 5.00th=[ 4293], 10.00th=[ 5669], 20.00th=[ 7373], 00:09:44.397 | 30.00th=[ 9896], 40.00th=[11338], 50.00th=[12256], 60.00th=[12518], 00:09:44.397 | 70.00th=[15664], 80.00th=[17433], 90.00th=[34341], 95.00th=[56361], 00:09:44.397 | 99.00th=[61080], 99.50th=[63701], 99.90th=[67634], 99.95th=[67634], 00:09:44.397 | 99.99th=[67634] 00:09:44.397 bw ( KiB/s): min=13960, max=21360, per=21.46%, avg=17660.00, stdev=5232.59, samples=2 00:09:44.397 iops : min= 3490, max= 5340, avg=4415.00, stdev=1308.15, samples=2 00:09:44.397 lat (msec) : 2=0.08%, 4=2.48%, 10=36.43%, 20=44.69%, 50=12.55% 00:09:44.397 lat (msec) : 100=3.77% 00:09:44.397 cpu : usr=3.87%, sys=4.66%, ctx=444, majf=0, minf=1 00:09:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.397 issued rwts: total=4096,4542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.397 job2: (groupid=0, jobs=1): err= 0: pid=1585748: Wed Nov 6 13:06:26 2024 00:09:44.397 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:09:44.397 slat (nsec): min=1051, max=12161k, avg=86568.96, stdev=657755.93 00:09:44.397 clat (usec): min=3705, max=25055, avg=11079.22, stdev=3225.63 00:09:44.397 lat (usec): min=3711, max=25082, avg=11165.79, stdev=3283.79 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7504], 00:09:44.397 | 30.00th=[ 8979], 40.00th=[10814], 50.00th=[11600], 60.00th=[11863], 00:09:44.397 | 70.00th=[12387], 80.00th=[13829], 90.00th=[15401], 95.00th=[15926], 00:09:44.397 | 99.00th=[19268], 99.50th=[20841], 99.90th=[23725], 99.95th=[23725], 00:09:44.397 | 99.99th=[25035] 00:09:44.397 write: IOPS=5254, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1009msec); 0 zone resets 00:09:44.397 slat (nsec): min=1755, max=18548k, avg=99326.11, stdev=622564.05 00:09:44.397 clat (usec): min=2590, max=58492, avg=13041.07, stdev=10380.40 00:09:44.397 lat (usec): min=2598, max=58512, avg=13140.39, stdev=10447.23 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6456], 00:09:44.397 | 30.00th=[ 7111], 40.00th=[ 8586], 50.00th=[10028], 60.00th=[11338], 00:09:44.397 | 70.00th=[12256], 80.00th=[13173], 90.00th=[28181], 95.00th=[41157], 00:09:44.397 | 99.00th=[52691], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:09:44.397 | 99.99th=[58459] 00:09:44.397 bw ( KiB/s): min=20480, max=20912, per=25.15%, avg=20696.00, stdev=305.47, samples=2 00:09:44.397 iops : min= 5120, max= 5228, avg=5174.00, stdev=76.37, samples=2 00:09:44.397 lat (msec) : 4=0.69%, 10=41.75%, 20=49.59%, 50=7.11%, 100=0.86% 00:09:44.397 cpu : usr=4.07%, sys=6.45%, ctx=395, majf=0, minf=1 00:09:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.397 issued rwts: total=5120,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.397 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:44.397 job3: (groupid=0, jobs=1): err= 0: pid=1585749: Wed Nov 6 13:06:26 2024 00:09:44.397 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:09:44.397 slat (nsec): min=1036, max=18851k, avg=92067.54, stdev=722730.43 00:09:44.397 clat (usec): min=5290, max=47730, avg=11570.33, stdev=6992.74 00:09:44.397 lat (usec): min=5296, max=47740, avg=11662.40, stdev=7052.31 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7308], 00:09:44.397 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[10421], 00:09:44.397 | 70.00th=[11076], 80.00th=[13042], 90.00th=[20055], 95.00th=[26346], 00:09:44.397 | 99.00th=[43254], 99.50th=[44827], 99.90th=[46924], 99.95th=[47973], 00:09:44.397 | 99.99th=[47973] 00:09:44.397 write: IOPS=5272, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:09:44.397 slat (nsec): min=1768, max=40158k, avg=94199.57, stdev=739314.20 00:09:44.397 clat (usec): min=2711, max=60391, avg=12395.87, stdev=9325.18 00:09:44.397 lat (usec): min=2720, max=60399, avg=12490.07, stdev=9387.16 00:09:44.397 clat percentiles (usec): 00:09:44.397 | 1.00th=[ 3490], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 6521], 00:09:44.397 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[12256], 00:09:44.397 | 70.00th=[12518], 80.00th=[15533], 90.00th=[20055], 95.00th=[31065], 00:09:44.397 | 99.00th=[56886], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:09:44.397 | 99.99th=[60556] 00:09:44.397 bw ( KiB/s): min=14640, max=26736, per=25.14%, avg=20688.00, stdev=8553.16, samples=2 00:09:44.397 iops : min= 3660, max= 6684, avg=5172.00, stdev=2138.29, samples=2 00:09:44.397 lat (msec) : 4=0.84%, 10=52.40%, 20=36.54%, 50=9.21%, 100=1.00% 00:09:44.397 cpu : usr=4.78%, sys=5.18%, ctx=444, majf=0, minf=1 00:09:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.397 issued rwts: total=5120,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.397 00:09:44.397 Run status group 0 (all jobs): 00:09:44.397 READ: bw=76.1MiB/s (79.8MB/s), 15.8MiB/s-20.8MiB/s (16.6MB/s-21.8MB/s), io=76.9MiB (80.6MB), run=1004-1010msec 00:09:44.397 WRITE: bw=80.3MiB/s (84.3MB/s), 17.6MiB/s-21.9MiB/s (18.4MB/s-23.0MB/s), io=81.2MiB (85.1MB), run=1004-1010msec 00:09:44.397 00:09:44.397 Disk stats (read/write): 00:09:44.397 nvme0n1: ios=4373/4608, merge=0/0, ticks=51447/51565, in_queue=103012, util=87.37% 00:09:44.397 nvme0n2: ios=4006/4096, merge=0/0, ticks=49763/52577, in_queue=102340, util=91.24% 00:09:44.397 nvme0n3: ios=4346/4608, merge=0/0, ticks=44480/56171, in_queue=100651, util=95.49% 00:09:44.397 nvme0n4: ios=3633/3952, merge=0/0, ticks=45797/53876, in_queue=99673, util=96.71% 00:09:44.397 13:06:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:44.397 [global] 00:09:44.397 thread=1 00:09:44.397 invalidate=1 00:09:44.397 rw=randwrite 00:09:44.397 time_based=1 00:09:44.397 runtime=1 00:09:44.397 ioengine=libaio 00:09:44.397 direct=1 00:09:44.397 bs=4096 00:09:44.397 iodepth=128 00:09:44.397 norandommap=0 00:09:44.397 numjobs=1 00:09:44.397 00:09:44.397 verify_dump=1 00:09:44.397 
verify_backlog=512 00:09:44.397 verify_state_save=0 00:09:44.397 do_verify=1 00:09:44.397 verify=crc32c-intel 00:09:44.397 [job0] 00:09:44.397 filename=/dev/nvme0n1 00:09:44.397 [job1] 00:09:44.397 filename=/dev/nvme0n2 00:09:44.397 [job2] 00:09:44.397 filename=/dev/nvme0n3 00:09:44.397 [job3] 00:09:44.397 filename=/dev/nvme0n4 00:09:44.397 Could not set queue depth (nvme0n1) 00:09:44.397 Could not set queue depth (nvme0n2) 00:09:44.397 Could not set queue depth (nvme0n3) 00:09:44.397 Could not set queue depth (nvme0n4) 00:09:44.657 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.657 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.657 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.657 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.657 fio-3.35 00:09:44.657 Starting 4 threads 00:09:46.040 00:09:46.040 job0: (groupid=0, jobs=1): err= 0: pid=1586271: Wed Nov 6 13:06:27 2024 00:09:46.040 read: IOPS=8014, BW=31.3MiB/s (32.8MB/s)(31.5MiB/1006msec) 00:09:46.040 slat (nsec): min=917, max=11920k, avg=63060.34, stdev=495521.45 00:09:46.040 clat (usec): min=1410, max=40342, avg=8516.70, stdev=3829.65 00:09:46.040 lat (usec): min=2124, max=40729, avg=8579.76, stdev=3866.28 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 3785], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6325], 00:09:46.040 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7963], 00:09:46.040 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[12256], 95.00th=[14353], 00:09:46.040 | 99.00th=[22414], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:09:46.040 | 99.99th=[40109] 00:09:46.040 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:09:46.040 slat (nsec): min=1550, max=7837.3k, avg=45694.04, stdev=343597.34 00:09:46.040 clat (usec): min=832, max=52730, avg=7195.18, stdev=4599.81 00:09:46.040 lat (usec): min=860, max=52739, avg=7240.87, stdev=4614.02 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 1663], 5.00th=[ 3359], 10.00th=[ 4047], 20.00th=[ 4752], 00:09:46.040 | 30.00th=[ 5407], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6652], 00:09:46.040 | 70.00th=[ 7046], 80.00th=[ 8455], 90.00th=[10683], 95.00th=[13435], 00:09:46.040 | 99.00th=[28967], 99.50th=[40633], 99.90th=[51119], 99.95th=[52691], 00:09:46.040 | 99.99th=[52691] 00:09:46.040 bw ( KiB/s): min=29312, max=36151, per=33.53%, avg=32731.50, stdev=4835.90, samples=2 00:09:46.040 iops : min= 7328, max= 9037, avg=8182.50, stdev=1208.45, samples=2 00:09:46.040 lat (usec) : 1000=0.11% 00:09:46.040 lat (msec) : 2=0.62%, 4=4.72%, 10=77.74%, 20=15.51%, 50=1.18% 00:09:46.040 lat (msec) : 100=0.12% 00:09:46.040 cpu : usr=5.57%, sys=9.85%, ctx=479, majf=0, minf=1 00:09:46.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:46.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.040 issued rwts: total=8063,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.040 job1: (groupid=0, jobs=1): err= 0: pid=1586272: Wed Nov 6 13:06:27 2024 00:09:46.040 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:09:46.040 slat 
(nsec): min=923, max=16060k, avg=73456.61, stdev=572732.88 00:09:46.040 clat (usec): min=3767, max=30707, avg=9588.21, stdev=3279.83 00:09:46.040 lat (usec): min=3772, max=30718, avg=9661.67, stdev=3322.89 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 4621], 5.00th=[ 5276], 10.00th=[ 6783], 20.00th=[ 7635], 00:09:46.040 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9241], 00:09:46.040 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[14353], 95.00th=[17171], 00:09:46.040 | 99.00th=[20841], 99.50th=[22676], 99.90th=[25560], 99.95th=[25560], 00:09:46.040 | 99.99th=[30802] 00:09:46.040 write: IOPS=7112, BW=27.8MiB/s (29.1MB/s)(27.9MiB/1005msec); 0 zone resets 00:09:46.040 slat (nsec): min=1540, max=7426.5k, avg=60423.64, stdev=403304.39 00:09:46.040 clat (usec): min=1019, max=25463, avg=8914.72, stdev=4182.49 00:09:46.040 lat (usec): min=1027, max=25467, avg=8975.15, stdev=4214.08 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 2311], 5.00th=[ 3949], 10.00th=[ 4424], 20.00th=[ 5407], 00:09:46.040 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7570], 60.00th=[ 8160], 00:09:46.040 | 70.00th=[ 9765], 80.00th=[12911], 90.00th=[15664], 95.00th=[17433], 00:09:46.040 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20317], 99.95th=[23725], 00:09:46.040 | 99.99th=[25560] 00:09:46.040 bw ( KiB/s): min=26768, max=29333, per=28.74%, avg=28050.50, stdev=1813.73, samples=2 00:09:46.040 iops : min= 6692, max= 7333, avg=7012.50, stdev=453.26, samples=2 00:09:46.040 lat (msec) : 2=0.35%, 4=2.71%, 10=67.43%, 20=28.68%, 50=0.83% 00:09:46.040 cpu : usr=4.78%, sys=7.77%, ctx=443, majf=0, minf=1 00:09:46.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:46.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.040 issued rwts: total=6656,7148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.040 job2: (groupid=0, jobs=1): err= 0: pid=1586273: Wed Nov 6 13:06:27 2024 00:09:46.040 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:09:46.040 slat (nsec): min=974, max=22508k, avg=145894.67, stdev=1006358.06 00:09:46.040 clat (usec): min=6364, max=59202, avg=18388.13, stdev=11519.40 00:09:46.040 lat (usec): min=6375, max=62938, avg=18534.03, stdev=11618.43 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[11076], 00:09:46.040 | 30.00th=[11731], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960], 00:09:46.040 | 70.00th=[15795], 80.00th=[25035], 90.00th=[41157], 95.00th=[45351], 00:09:46.040 | 99.00th=[51643], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:09:46.040 | 99.99th=[58983] 00:09:46.040 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:46.040 slat (nsec): min=1649, max=15842k, avg=198719.39, stdev=972433.88 00:09:46.040 clat (usec): min=2039, max=94714, avg=25647.45, stdev=22378.85 00:09:46.040 lat (usec): min=2419, max=94723, avg=25846.17, stdev=22539.08 00:09:46.040 clat percentiles (usec): 00:09:46.040 | 1.00th=[ 3982], 5.00th=[ 7504], 10.00th=[ 7898], 20.00th=[ 8455], 00:09:46.040 | 30.00th=[10159], 40.00th=[13304], 50.00th=[15008], 60.00th=[20841], 00:09:46.040 | 70.00th=[28705], 80.00th=[37487], 90.00th=[66847], 95.00th=[76022], 00:09:46.040 | 99.00th=[91751], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:09:46.040 | 99.99th=[94897] 
00:09:46.040 bw ( KiB/s): min=10299, max=13192, per=12.03%, avg=11745.50, stdev=2045.66, samples=2 00:09:46.040 iops : min= 2574, max= 3298, avg=2936.00, stdev=511.95, samples=2 00:09:46.040 lat (msec) : 4=0.59%, 10=17.86%, 20=47.94%, 50=24.67%, 100=8.94% 00:09:46.040 cpu : usr=1.60%, sys=3.89%, ctx=378, majf=0, minf=1 00:09:46.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:46.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.040 issued rwts: total=2560,3066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.041 job3: (groupid=0, jobs=1): err= 0: pid=1586274: Wed Nov 6 13:06:27 2024 00:09:46.041 read: IOPS=5999, BW=23.4MiB/s (24.6MB/s)(23.5MiB/1002msec) 00:09:46.041 slat (nsec): min=974, max=10783k, avg=83256.77, stdev=478822.27 00:09:46.041 clat (usec): min=972, max=35841, avg=10339.78, stdev=3853.97 00:09:46.041 lat (usec): min=3172, max=35872, avg=10423.03, stdev=3892.62 00:09:46.041 clat percentiles (usec): 00:09:46.041 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8586], 00:09:46.041 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:09:46.041 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[12780], 95.00th=[20055], 00:09:46.041 | 99.00th=[27657], 99.50th=[30016], 99.90th=[30802], 99.95th=[30802], 00:09:46.041 | 99.99th=[35914] 00:09:46.041 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:46.041 slat (nsec): min=1593, max=12466k, avg=77818.41, stdev=434431.80 00:09:46.041 clat (usec): min=5822, max=39997, avg=10485.66, stdev=4690.09 00:09:46.041 lat (usec): min=5824, max=40030, avg=10563.48, stdev=4727.36 00:09:46.041 clat percentiles (usec): 00:09:46.041 | 1.00th=[ 6390], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8029], 00:09:46.041 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9110], 00:09:46.041 | 70.00th=[ 9503], 80.00th=[11338], 90.00th=[17695], 95.00th=[21103], 00:09:46.041 | 99.00th=[30278], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:09:46.041 | 99.99th=[40109] 00:09:46.041 bw ( KiB/s): min=21780, max=27328, per=25.15%, avg=24554.00, stdev=3923.03, samples=2 00:09:46.041 iops : min= 5445, max= 6832, avg=6138.50, stdev=980.76, samples=2 00:09:46.041 lat (usec) : 1000=0.01% 00:09:46.041 lat (msec) : 4=0.35%, 10=74.42%, 20=19.02%, 50=6.20% 00:09:46.041 cpu : usr=3.20%, sys=4.20%, ctx=747, majf=0, minf=2 00:09:46.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:46.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.041 issued rwts: total=6011,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.041 00:09:46.041 Run status group 0 (all jobs): 00:09:46.041 READ: bw=90.4MiB/s (94.8MB/s), 9.97MiB/s-31.3MiB/s (10.5MB/s-32.8MB/s), io=91.0MiB (95.4MB), run=1002-1006msec 00:09:46.041 WRITE: bw=95.3MiB/s (100.0MB/s), 11.9MiB/s-31.8MiB/s (12.5MB/s-33.4MB/s), io=95.9MiB (101MB), run=1002-1006msec 00:09:46.041 00:09:46.041 Disk stats (read/write): 00:09:46.041 nvme0n1: ios=6194/6223, merge=0/0, ticks=50762/44481, in_queue=95243, util=82.77% 00:09:46.041 nvme0n2: ios=5662/5715, merge=0/0, ticks=50620/44460, in_queue=95080, util=83.57% 00:09:46.041 nvme0n3: 
ios=2066/2155, merge=0/0, ticks=13981/18756, in_queue=32737, util=98.68% 00:09:46.041 nvme0n4: ios=4628/4608, merge=0/0, ticks=16972/15603, in_queue=32575, util=97.30% 00:09:46.041 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:46.041 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1586604 00:09:46.041 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:46.041 13:06:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:46.041 [global] 00:09:46.041 thread=1 00:09:46.041 invalidate=1 00:09:46.041 rw=read 00:09:46.041 time_based=1 00:09:46.041 runtime=10 00:09:46.041 ioengine=libaio 00:09:46.041 direct=1 00:09:46.041 bs=4096 00:09:46.041 iodepth=1 00:09:46.041 norandommap=1 00:09:46.041 numjobs=1 00:09:46.041 00:09:46.041 [job0] 00:09:46.041 filename=/dev/nvme0n1 00:09:46.041 [job1] 00:09:46.041 filename=/dev/nvme0n2 00:09:46.041 [job2] 00:09:46.041 filename=/dev/nvme0n3 00:09:46.041 [job3] 00:09:46.041 filename=/dev/nvme0n4 00:09:46.041 Could not set queue depth (nvme0n1) 00:09:46.041 Could not set queue depth (nvme0n2) 00:09:46.041 Could not set queue depth (nvme0n3) 00:09:46.041 Could not set queue depth (nvme0n4) 00:09:46.612 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.612 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.612 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.612 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.612 fio-3.35 00:09:46.612 Starting 4 threads 00:09:49.157 13:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:49.157 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7122944, buflen=4096 00:09:49.157 fio: pid=1586798, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.157 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:49.417 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=282624, buflen=4096 00:09:49.417 fio: pid=1586797, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.417 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.417 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:49.677 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.677 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:49.677 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=294912, buflen=4096 00:09:49.677 fio: pid=1586794, err=5/file:io_u.c:1889, func=io_u error, error=Input/output 
error 00:09:49.677 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.677 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:49.677 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7450624, buflen=4096 00:09:49.677 fio: pid=1586796, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:49.938 00:09:49.938 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1586794: Wed Nov 6 13:06:31 2024 00:09:49.938 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(288KiB/2978msec) 00:09:49.938 slat (usec): min=23, max=7076, avg=296.28, stdev=1309.09 00:09:49.938 clat (usec): min=721, max=43020, avg=40728.43, stdev=6823.05 00:09:49.938 lat (usec): min=759, max=48425, avg=40930.53, stdev=6447.17 00:09:49.938 clat percentiles (usec): 00:09:49.938 | 1.00th=[ 725], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:49.938 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:49.938 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:09:49.938 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:49.938 | 99.99th=[43254] 00:09:49.938 bw ( KiB/s): min= 96, max= 96, per=2.04%, avg=96.00, stdev= 0.00, samples=5 00:09:49.938 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:09:49.938 lat (usec) : 750=1.37%, 1000=1.37% 00:09:49.938 lat (msec) : 50=95.89% 00:09:49.938 cpu : usr=0.00%, sys=0.27%, ctx=77, majf=0, minf=1 00:09:49.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.938 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.938 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.938 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1586796: Wed Nov 6 13:06:31 2024 00:09:49.938 read: IOPS=579, BW=2315KiB/s (2371kB/s)(7276KiB/3143msec) 00:09:49.938 slat (usec): min=7, max=16413, avg=52.82, stdev=559.70 00:09:49.938 clat (usec): min=421, max=45041, avg=1656.42, stdev=5186.33 00:09:49.938 lat (usec): min=447, max=45067, avg=1709.26, stdev=5213.68 00:09:49.938 clat percentiles (usec): 00:09:49.938 | 1.00th=[ 635], 5.00th=[ 750], 10.00th=[ 783], 20.00th=[ 889], 00:09:49.938 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 1004], 00:09:49.938 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1221], 00:09:49.938 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[44827], 00:09:49.938 | 99.99th=[44827] 00:09:49.938 bw ( KiB/s): min= 800, max= 4392, per=50.34%, avg=2370.67, stdev=1550.52, samples=6 00:09:49.938 iops : min= 200, max= 1098, avg=592.67, stdev=387.63, samples=6 00:09:49.938 lat (usec) : 500=0.11%, 750=4.78%, 1000=53.52% 00:09:49.938 lat (msec) : 2=39.84%, 4=0.05%, 50=1.65% 00:09:49.938 cpu : usr=0.57%, sys=1.75%, ctx=1826, majf=0, minf=1 00:09:49.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.939 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:49.939 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.939 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1586797: Wed Nov 6 13:06:31 2024 00:09:49.939 read: IOPS=25, BW=100KiB/s (103kB/s)(276KiB/2755msec) 00:09:49.939 slat (usec): min=25, max=8584, avg=149.85, stdev=1022.80 00:09:49.939 clat (usec): min=796, max=43053, avg=39456.74, stdev=9649.75 00:09:49.939 lat (usec): min=829, max=49966, avg=39608.36, stdev=9725.01 00:09:49.939 clat percentiles (usec): 00:09:49.939 | 1.00th=[ 799], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41157], 00:09:49.939 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:49.939 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:49.939 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:49.939 | 99.99th=[43254] 00:09:49.939 bw ( KiB/s): min= 96, max= 112, per=2.10%, avg=99.20, stdev= 7.16, samples=5 00:09:49.939 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:09:49.939 lat (usec) : 1000=4.29% 00:09:49.939 lat (msec) : 2=1.43%, 50=92.86% 00:09:49.939 cpu : usr=0.00%, sys=0.15%, ctx=72, majf=0, minf=1 00:09:49.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.939 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.939 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.939 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1586798: Wed Nov 6 13:06:31 2024 00:09:49.939 read: IOPS=678, BW=2714KiB/s (2779kB/s)(6956KiB/2563msec) 00:09:49.939 slat (usec): min=6, max=550, avg=27.61, stdev=13.06 00:09:49.939 clat (usec): min=208, max=42931, avg=1427.43, stdev=4378.40 00:09:49.939 lat (usec): min=235, max=42959, avg=1455.03, stdev=4378.37 00:09:49.939 clat percentiles (usec): 00:09:49.939 | 1.00th=[ 502], 5.00th=[ 791], 10.00th=[ 865], 20.00th=[ 914], 00:09:49.939 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 979], 00:09:49.939 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1074], 00:09:49.939 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:49.939 | 99.99th=[42730] 00:09:49.939 bw ( KiB/s): min= 584, max= 4160, per=57.78%, avg=2720.00, stdev=1843.32, samples=5 00:09:49.939 iops : min= 146, max= 1040, avg=680.00, stdev=460.83, samples=5 00:09:49.939 lat (usec) : 250=0.11%, 500=0.80%, 750=1.90%, 1000=70.23% 00:09:49.939 lat (msec) : 2=25.69%, 10=0.06%, 50=1.15% 00:09:49.939 cpu : usr=1.72%, sys=2.26%, ctx=1740, majf=0, minf=2 00:09:49.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.939 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.939 issued rwts: total=1740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.939 00:09:49.939 Run status group 0 (all jobs): 00:09:49.939 READ: bw=4708KiB/s (4821kB/s), 96.7KiB/s-2714KiB/s (99.0kB/s-2779kB/s), io=14.4MiB (15.2MB), run=2563-3143msec 00:09:49.939 00:09:49.939 Disk stats (read/write): 00:09:49.939 nvme0n1: ios=68/0, merge=0/0, 
ticks=2766/0, in_queue=2766, util=92.92%
00:09:49.939 nvme0n2: ios=1786/0, merge=0/0, ticks=2859/0, in_queue=2859, util=93.22%
00:09:49.939 nvme0n3: ios=63/0, merge=0/0, ticks=2516/0, in_queue=2516, util=95.64%
00:09:49.939 nvme0n4: ios=1739/0, merge=0/0, ticks=2355/0, in_queue=2355, util=96.33%
00:09:49.939 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:49.939 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:09:50.200 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:50.200 13:06:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:09:50.461 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:50.461 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:09:50.461 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:09:50.461 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1586604
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:50.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:09:50.722 nvmf hotplug test: fio failed as expected
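
The hotplug phase that just finished is deliberately destructive: four fio readers run against /dev/nvme0n1 through /dev/nvme0n4 while the backing bdevs (concat0, raid0, Malloc0 through Malloc6) are deleted over RPC underneath them, and the phase passes precisely because fio exits nonzero (fio_status=4 above). Condensed into a standalone bash sketch (an illustration of the target/fio.sh flow traced above, assuming a target already serving nqn.2016-06.io.spdk:cnode1 and the stock scripts/rpc.py; not the verbatim script):

    # start background readers against the connected namespaces, as above
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                   # let the jobs start issuing I/O
    scripts/rpc.py bdev_raid_delete concat0   # pull bdevs out from under live I/O
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?          # fio must NOT exit cleanly
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi
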
00:09:50.722 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:50.983 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:50.984 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:50.984 rmmod nvme_tcp
00:09:50.984 rmmod nvme_fabrics
00:09:50.984 rmmod nvme_keyring
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1582848 ']'
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1582848
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1582848 ']'
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1582848
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:51.244 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1582848
00:09:51.245 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:51.245 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:51.245 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1582848'
00:09:51.245 killing process with pid 1582848
00:09:51.245 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1582848
00:09:51.245 13:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1582848
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:51.245 13:06:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:53.789 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:53.789
00:09:53.789 real 0m29.676s
00:09:53.789 user 2m35.477s
00:09:53.789 sys 0m9.808s
00:09:53.789 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:53.789 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:09:53.790 ************************************
00:09:53.790 END TEST nvmf_fio_target
00:09:53.790 ************************************
00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:53.790 ************************************
00:09:53.790 START TEST nvmf_bdevio
00:09:53.790 ************************************
00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:09:53.790 * Looking for test storage...
00:09:53.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.790 --rc genhtml_branch_coverage=1 00:09:53.790 --rc genhtml_function_coverage=1 00:09:53.790 --rc genhtml_legend=1 00:09:53.790 --rc geninfo_all_blocks=1 00:09:53.790 --rc geninfo_unexecuted_blocks=1 00:09:53.790 00:09:53.790 ' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.790 --rc genhtml_branch_coverage=1 00:09:53.790 --rc genhtml_function_coverage=1 00:09:53.790 --rc genhtml_legend=1 00:09:53.790 --rc geninfo_all_blocks=1 00:09:53.790 --rc geninfo_unexecuted_blocks=1 00:09:53.790 00:09:53.790 ' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.790 --rc genhtml_branch_coverage=1 00:09:53.790 --rc genhtml_function_coverage=1 00:09:53.790 --rc genhtml_legend=1 00:09:53.790 --rc geninfo_all_blocks=1 00:09:53.790 --rc geninfo_unexecuted_blocks=1 00:09:53.790 00:09:53.790 ' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.790 --rc genhtml_branch_coverage=1 00:09:53.790 --rc genhtml_function_coverage=1 00:09:53.790 --rc genhtml_legend=1 00:09:53.790 --rc geninfo_all_blocks=1 00:09:53.790 --rc geninfo_unexecuted_blocks=1 00:09:53.790 00:09:53.790 ' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.790 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.791 13:06:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.934 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:01.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:01.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.935 13:06:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:01.935 Found net devices under 0000:31:00.0: cvl_0_0 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:01.935 Found net devices under 0000:31:00.1: cvl_0_1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.935 
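
At this point nvmf/common.sh has mapped the two E810 ports to cvl_0_0 and cvl_0_1 and assigned roles: cvl_0_0 becomes the target interface inside a private network namespace (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). The records below carry that out; condensed into a standalone bash sketch of the same topology (same device and namespace names as above, root required; an illustration of the nvmf_tcp_init steps, not the verbatim script):

    ip netns add cvl_0_0_ns_spdk                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listening port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
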
13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:10:01.935 00:10:01.935 --- 10.0.0.2 ping statistics --- 00:10:01.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.935 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:10:01.935 00:10:01.935 --- 10.0.0.1 ping statistics --- 00:10:01.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.935 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.935 13:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1591970 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1591970 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1591970 ']' 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.935 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.935 [2024-11-06 13:06:43.059657] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
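
With both pings answered, nvmfappstart launches the target application: nvmf_tgt runs under `ip netns exec cvl_0_0_ns_spdk` so its TCP listener binds inside the namespace, and waitforlisten blocks until the app's RPC socket answers. A minimal sketch of that launch pattern (binary path as in the run above; the polling loop is a simplified stand-in for waitforlisten in common/autotest_common.sh, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is up
    # (unix sockets live on the filesystem, so this works from the root namespace)
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
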
00:10:01.935 [2024-11-06 13:06:43.059725] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.935 [2024-11-06 13:06:43.160769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.935 [2024-11-06 13:06:43.211923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.935 [2024-11-06 13:06:43.211966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.935 [2024-11-06 13:06:43.211975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.935 [2024-11-06 13:06:43.211982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.935 [2024-11-06 13:06:43.211988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.935 [2024-11-06 13:06:43.213996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.935 [2024-11-06 13:06:43.214283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:01.936 [2024-11-06 13:06:43.214442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:01.936 [2024-11-06 13:06:43.214444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 [2024-11-06 13:06:43.941555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 Malloc0 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.197 13:06:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.197 13:06:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.197 [2024-11-06 13:06:44.014055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:02.197 { 00:10:02.197 "params": { 00:10:02.197 "name": "Nvme$subsystem", 00:10:02.197 "trtype": "$TEST_TRANSPORT", 00:10:02.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.197 "adrfam": "ipv4", 00:10:02.197 "trsvcid": "$NVMF_PORT", 00:10:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.197 "hdgst": ${hdgst:-false}, 00:10:02.197 "ddgst": ${ddgst:-false} 00:10:02.197 }, 00:10:02.197 "method": "bdev_nvme_attach_controller" 00:10:02.197 } 00:10:02.197 EOF 00:10:02.197 )") 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:02.197 13:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:02.197 "params": { 00:10:02.197 "name": "Nvme1", 00:10:02.197 "trtype": "tcp", 00:10:02.197 "traddr": "10.0.0.2", 00:10:02.197 "adrfam": "ipv4", 00:10:02.197 "trsvcid": "4420", 00:10:02.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.197 "hdgst": false, 00:10:02.197 "ddgst": false 00:10:02.197 }, 00:10:02.197 "method": "bdev_nvme_attach_controller" 00:10:02.197 }' 00:10:02.197 [2024-11-06 13:06:44.073177] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
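
The JSON printed above is gen_nvmf_target_json's output; the harness hands it to bdevio on file descriptor 62 (`--json /dev/fd/62` is the bash process-substitution path), so bdevio brings up its own bdev_nvme initiator, attaches to the listener created a moment earlier, and runs its suite against the resulting Nvme1n1 bdev. A standalone sketch of the same invocation (the outer "subsystems"/"config" envelope is the usual SPDK JSON-config shape and is an assumption here; only the method/params object is taken from the log above):

    # envelope assumed; params copied from the printed config above
    config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    # feed the config to bdevio over an anonymous fd, as the harness does
    test/bdev/bdevio/bdevio --json <(printf '%s\n' "$config")
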
00:10:02.197 [2024-11-06 13:06:44.073242] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592221 ] 00:10:02.458 [2024-11-06 13:06:44.168252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.458 [2024-11-06 13:06:44.225792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.458 [2024-11-06 13:06:44.225900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.458 [2024-11-06 13:06:44.225902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.719 I/O targets: 00:10:02.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:02.719 00:10:02.719 00:10:02.719 CUnit - A unit testing framework for C - Version 2.1-3 00:10:02.719 http://cunit.sourceforge.net/ 00:10:02.719 00:10:02.719 00:10:02.719 Suite: bdevio tests on: Nvme1n1 00:10:02.719 Test: blockdev write read block ...passed 00:10:02.982 Test: blockdev write zeroes read block ...passed 00:10:02.982 Test: blockdev write zeroes read no split ...passed 00:10:02.982 Test: blockdev write zeroes read split ...passed 00:10:02.982 Test: blockdev write zeroes read split partial ...passed 00:10:02.982 Test: blockdev reset ...[2024-11-06 13:06:44.733407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:02.982 [2024-11-06 13:06:44.733512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21881c0 (9): Bad file descriptor 00:10:02.982 [2024-11-06 13:06:44.754713] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:02.982 passed 00:10:02.982 Test: blockdev write read 8 blocks ...passed 00:10:02.982 Test: blockdev write read size > 128k ...passed 00:10:02.982 Test: blockdev write read invalid size ...passed 00:10:02.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:02.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:02.982 Test: blockdev write read max offset ...passed 00:10:03.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:03.252 Test: blockdev writev readv 8 blocks ...passed 00:10:03.252 Test: blockdev writev readv 30 x 1block ...passed 00:10:03.252 Test: blockdev writev readv block ...passed 00:10:03.252 Test: blockdev writev readv size > 128k ...passed 00:10:03.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:03.252 Test: blockdev comparev and writev ...[2024-11-06 13:06:44.978304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.978353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.978371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.978380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.978976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.978993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.979009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.979017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.979593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.979608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.979622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.979629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.980206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.980221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:44.980235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.252 [2024-11-06 13:06:44.980242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:03.252 passed 00:10:03.252 Test: blockdev nvme passthru rw ...passed 00:10:03.252 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:06:45.064662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.252 [2024-11-06 13:06:45.064679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:45.065059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.252 [2024-11-06 13:06:45.065073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:45.065462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.252 [2024-11-06 13:06:45.065475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:03.252 [2024-11-06 13:06:45.065841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.252 [2024-11-06 13:06:45.065857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:03.252 passed 00:10:03.252 Test: blockdev nvme admin passthru ...passed 00:10:03.252 Test: blockdev copy ...passed 00:10:03.252 00:10:03.252 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.252 suites 1 1 n/a 0 0 00:10:03.252 tests 23 23 23 0 0 00:10:03.252 asserts 152 152 152 0 n/a 00:10:03.252 00:10:03.252 Elapsed time = 1.144 seconds 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.541 rmmod nvme_tcp 00:10:03.541 rmmod nvme_fabrics 00:10:03.541 rmmod nvme_keyring 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
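For reference, the target-side configuration that this nvmf_bdevio run exercised can be replayed by hand. The sketch below is reconstructed from the rpc_cmd trace earlier in the log and assumes an SPDK checkout with nvmf_tgt already running on the default RPC socket; the Malloc0 backing bdev, the example NQNs, and the 10.0.0.2:4420 listener are the values this rig used, and every flag is copied verbatim from the trace.

# minimal replay of the bdevio.sh target setup (a sketch, not the test script itself)
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420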
00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1591970 ']' 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1591970 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1591970 ']' 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1591970 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1591970 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1591970' 00:10:03.541 killing process with pid 1591970 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1591970 00:10:03.541 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1591970 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.837 13:06:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.771 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.771 00:10:05.771 real 0m12.393s 00:10:05.771 user 0m13.914s 00:10:05.771 sys 0m6.310s 00:10:05.772 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.772 13:06:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.772 ************************************ 00:10:05.772 END TEST nvmf_bdevio 00:10:05.772 ************************************ 00:10:05.772 13:06:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:05.772 00:10:05.772 real 5m6.353s 00:10:05.772 user 11m47.467s 00:10:05.772 sys 1m52.202s 
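On the initiator side, bdevio attaches through SPDK's userspace NVMe/TCP driver in the bdev layer rather than the kernel nvme-tcp module, reading its configuration from the JSON that gen_nvmf_target_json printed over /dev/fd/62 above. A minimal standalone equivalent is sketched below: the bdev_nvme_attach_controller entry is copied from the trace, while the surrounding "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config shape and is an assumption here, since the log only echoes the inner entry.

# hypothetical hand-rolled version of 'bdevio --json /dev/fd/62' (run from an SPDK checkout)
test/bdev/bdevio/bdevio --json /dev/fd/62 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON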
00:10:05.772 13:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.772 13:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.772 ************************************ 00:10:05.772 END TEST nvmf_target_core 00:10:05.772 ************************************ 00:10:06.033 13:06:47 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.033 13:06:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.033 13:06:47 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.033 13:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.033 ************************************ 00:10:06.033 START TEST nvmf_target_extra 00:10:06.033 ************************************ 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.033 * Looking for test storage... 00:10:06.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.033 --rc genhtml_branch_coverage=1 00:10:06.033 --rc genhtml_function_coverage=1 00:10:06.033 --rc genhtml_legend=1 00:10:06.033 --rc geninfo_all_blocks=1 00:10:06.033 --rc geninfo_unexecuted_blocks=1 00:10:06.033 00:10:06.033 ' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.033 --rc genhtml_branch_coverage=1 00:10:06.033 --rc genhtml_function_coverage=1 00:10:06.033 --rc genhtml_legend=1 00:10:06.033 --rc geninfo_all_blocks=1 00:10:06.033 --rc geninfo_unexecuted_blocks=1 00:10:06.033 00:10:06.033 ' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.033 --rc genhtml_branch_coverage=1 00:10:06.033 --rc genhtml_function_coverage=1 00:10:06.033 --rc genhtml_legend=1 00:10:06.033 --rc geninfo_all_blocks=1 00:10:06.033 --rc geninfo_unexecuted_blocks=1 00:10:06.033 00:10:06.033 ' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.033 --rc genhtml_branch_coverage=1 00:10:06.033 --rc genhtml_function_coverage=1 00:10:06.033 --rc genhtml_legend=1 00:10:06.033 --rc geninfo_all_blocks=1 00:10:06.033 --rc geninfo_unexecuted_blocks=1 00:10:06.033 00:10:06.033 ' 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.033 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:06.295 ************************************ 00:10:06.295 START TEST nvmf_example 00:10:06.295 ************************************ 00:10:06.295 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.295 * Looking for test storage... 
00:10:06.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.295 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.296 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:06.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.558 --rc genhtml_branch_coverage=1 00:10:06.558 --rc genhtml_function_coverage=1 00:10:06.558 --rc genhtml_legend=1 00:10:06.558 --rc geninfo_all_blocks=1 00:10:06.558 --rc geninfo_unexecuted_blocks=1 00:10:06.558 00:10:06.558 ' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:06.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.558 --rc genhtml_branch_coverage=1 00:10:06.558 --rc genhtml_function_coverage=1 00:10:06.558 --rc genhtml_legend=1 00:10:06.558 --rc geninfo_all_blocks=1 00:10:06.558 --rc geninfo_unexecuted_blocks=1 00:10:06.558 00:10:06.558 ' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:06.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.558 --rc genhtml_branch_coverage=1 00:10:06.558 --rc genhtml_function_coverage=1 00:10:06.558 --rc genhtml_legend=1 00:10:06.558 --rc geninfo_all_blocks=1 00:10:06.558 --rc geninfo_unexecuted_blocks=1 00:10:06.558 00:10:06.558 ' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:06.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.558 --rc genhtml_branch_coverage=1 00:10:06.558 --rc genhtml_function_coverage=1 00:10:06.558 --rc genhtml_legend=1 00:10:06.558 --rc geninfo_all_blocks=1 00:10:06.558 --rc geninfo_unexecuted_blocks=1 00:10:06.558 00:10:06.558 ' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:06.558 13:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.558 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:06.559 13:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.559 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:14.701 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:14.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:14.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:14.701 Found net devices under 0000:31:00.0: cvl_0_0 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:14.701 Found net devices under 0000:31:00.1: cvl_0_1 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.701 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.701 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:10:14.702 00:10:14.702 --- 10.0.0.2 ping statistics --- 00:10:14.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.702 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:10:14.702 00:10:14.702 --- 10.0.0.1 ping statistics --- 00:10:14.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.702 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1596979 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1596979 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1596979 ']' 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.702 13:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.702 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.963 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:10:15.224 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:25.226 Initializing NVMe Controllers
00:10:25.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:25.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:25.226 Initialization complete. Launching workers.
00:10:25.226 ========================================================
00:10:25.226                                                                            Latency(us)
00:10:25.226 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:10:25.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18771.37      73.33    3409.04     527.58   15459.54
00:10:25.226 ========================================================
00:10:25.226 Total                                                  :   18771.37      73.33    3409.04     527.58   15459.54
00:10:25.226
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:25.226 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:25.226 rmmod nvme_tcp
00:10:25.226 rmmod nvme_fabrics
00:10:25.226 rmmod nvme_keyring
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1596979 ']'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1596979
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1596979 ']'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1596979
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1596979
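As a quick consistency check on the results table above: 18771.37 IOPS of 4096-byte I/Os is 18771.37 * 4096 / 1048576 ≈ 73.3 MiB/s, matching the MiB/s column, and with the queue depth of 64 passed to spdk_nvme_perf (-q 64), Little's law predicts an average latency of 64 / 18771.37 ≈ 3.41 ms, matching the reported 3409.04 us.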
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1596979'
00:10:25.488 killing process with pid 1596979
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1596979
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1596979
00:10:25.488 nvmf threads initialize successfully
00:10:25.488 bdev subsystem init successfully
00:10:25.488 created a nvmf target service
00:10:25.488 create targets's poll groups done
00:10:25.488 all subsystems of target started
00:10:25.488 nvmf target is running
00:10:25.488 all subsystems of target stopped
00:10:25.488 destroy targets's poll groups done
00:10:25.488 destroyed the nvmf target service
00:10:25.488 bdev subsystem finish successfully
00:10:25.488 nvmf threads destroy successfully
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:25.488 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.035
00:10:28.035 real 0m21.458s
00:10:28.035 user 0m46.094s
00:10:28.035 sys 0m7.109s
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:28.035 ************************************
00:10:28.035 END TEST nvmf_example
00:10:28.035 ************************************
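The teardown traced above amounts to roughly the following sketch (interface and namespace names as in this run; remove_spdk_ns is approximated here by an explicit ip netns delete):

    # Strip the SPDK-tagged iptables rules while restoring the rest of the ruleset.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the test namespace and flush the leftover address on the host-side peer interface.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1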
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:28.035 ************************************
00:10:28.035 START TEST nvmf_filesystem
00:10:28.035 ************************************
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:28.035 * Looking for test storage...
00:10:28.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.035 --rc genhtml_branch_coverage=1 00:10:28.035 --rc genhtml_function_coverage=1 00:10:28.035 --rc genhtml_legend=1 00:10:28.035 --rc geninfo_all_blocks=1 00:10:28.035 --rc geninfo_unexecuted_blocks=1 00:10:28.035 00:10:28.035 ' 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.035 --rc genhtml_branch_coverage=1 00:10:28.035 --rc genhtml_function_coverage=1 00:10:28.035 --rc genhtml_legend=1 00:10:28.035 --rc geninfo_all_blocks=1 00:10:28.035 --rc geninfo_unexecuted_blocks=1 00:10:28.035 00:10:28.035 ' 00:10:28.035 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.035 --rc genhtml_branch_coverage=1 00:10:28.035 --rc genhtml_function_coverage=1 00:10:28.035 --rc genhtml_legend=1 00:10:28.035 --rc geninfo_all_blocks=1 00:10:28.035 --rc geninfo_unexecuted_blocks=1 00:10:28.035 00:10:28.035 ' 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.036 --rc genhtml_branch_coverage=1 00:10:28.036 --rc genhtml_function_coverage=1 00:10:28.036 --rc genhtml_legend=1 00:10:28.036 --rc geninfo_all_blocks=1 00:10:28.036 --rc geninfo_unexecuted_blocks=1 00:10:28.036 00:10:28.036 ' 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:28.036 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.036 
13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.036 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:28.037 #define SPDK_CONFIG_H 00:10:28.037 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:28.037 #define SPDK_CONFIG_APPS 1 00:10:28.037 #define SPDK_CONFIG_ARCH native 00:10:28.037 #undef SPDK_CONFIG_ASAN 00:10:28.037 #undef SPDK_CONFIG_AVAHI 00:10:28.037 #undef SPDK_CONFIG_CET 00:10:28.037 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:28.037 #define SPDK_CONFIG_COVERAGE 1 00:10:28.037 #define SPDK_CONFIG_CROSS_PREFIX 00:10:28.037 #undef SPDK_CONFIG_CRYPTO 00:10:28.037 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:28.037 #undef SPDK_CONFIG_CUSTOMOCF 00:10:28.037 #undef SPDK_CONFIG_DAOS 00:10:28.037 #define SPDK_CONFIG_DAOS_DIR 00:10:28.037 #define SPDK_CONFIG_DEBUG 1 00:10:28.037 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:28.037 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.037 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:28.037 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:28.037 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:28.037 #undef SPDK_CONFIG_DPDK_UADK 00:10:28.037 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.037 #define SPDK_CONFIG_EXAMPLES 1 00:10:28.037 #undef SPDK_CONFIG_FC 00:10:28.037 #define SPDK_CONFIG_FC_PATH 00:10:28.037 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:28.037 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:28.037 #define SPDK_CONFIG_FSDEV 1 00:10:28.037 #undef SPDK_CONFIG_FUSE 00:10:28.037 #undef SPDK_CONFIG_FUZZER 00:10:28.037 #define SPDK_CONFIG_FUZZER_LIB 00:10:28.037 #undef SPDK_CONFIG_GOLANG 00:10:28.037 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:28.037 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:28.037 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:28.037 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:28.037 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:28.037 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:28.037 #undef SPDK_CONFIG_HAVE_LZ4 00:10:28.037 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:28.037 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:28.037 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:28.037 #define SPDK_CONFIG_IDXD 1 00:10:28.037 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:28.037 #undef SPDK_CONFIG_IPSEC_MB 00:10:28.037 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:28.037 #define SPDK_CONFIG_ISAL 1 00:10:28.037 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:28.037 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:28.037 #define SPDK_CONFIG_LIBDIR 00:10:28.037 #undef SPDK_CONFIG_LTO 00:10:28.037 #define SPDK_CONFIG_MAX_LCORES 128 00:10:28.037 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:28.037 #define SPDK_CONFIG_NVME_CUSE 1 00:10:28.037 #undef SPDK_CONFIG_OCF 00:10:28.037 #define SPDK_CONFIG_OCF_PATH 00:10:28.037 #define SPDK_CONFIG_OPENSSL_PATH 00:10:28.037 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:28.037 #define SPDK_CONFIG_PGO_DIR 00:10:28.037 #undef SPDK_CONFIG_PGO_USE 00:10:28.037 #define SPDK_CONFIG_PREFIX /usr/local 00:10:28.037 #undef SPDK_CONFIG_RAID5F 00:10:28.037 #undef SPDK_CONFIG_RBD 00:10:28.037 #define SPDK_CONFIG_RDMA 1 00:10:28.037 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:28.037 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:28.037 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:28.037 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:28.037 #define SPDK_CONFIG_SHARED 1 00:10:28.037 #undef SPDK_CONFIG_SMA 00:10:28.037 #define SPDK_CONFIG_TESTS 1 00:10:28.037 #undef SPDK_CONFIG_TSAN 
00:10:28.037 #define SPDK_CONFIG_UBLK 1 00:10:28.037 #define SPDK_CONFIG_UBSAN 1 00:10:28.037 #undef SPDK_CONFIG_UNIT_TESTS 00:10:28.037 #undef SPDK_CONFIG_URING 00:10:28.037 #define SPDK_CONFIG_URING_PATH 00:10:28.037 #undef SPDK_CONFIG_URING_ZNS 00:10:28.037 #undef SPDK_CONFIG_USDT 00:10:28.037 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:28.037 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:28.037 #define SPDK_CONFIG_VFIO_USER 1 00:10:28.037 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:28.037 #define SPDK_CONFIG_VHOST 1 00:10:28.037 #define SPDK_CONFIG_VIRTIO 1 00:10:28.037 #undef SPDK_CONFIG_VTUNE 00:10:28.037 #define SPDK_CONFIG_VTUNE_DIR 00:10:28.037 #define SPDK_CONFIG_WERROR 1 00:10:28.037 #define SPDK_CONFIG_WPDK_DIR 00:10:28.037 #undef SPDK_CONFIG_XNVME 00:10:28.037 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:28.037 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:28.037 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:28.038 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:28.038 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
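The suppression-file steps traced above are how the harness silences a known libfuse3 leak under LeakSanitizer: the known leak pattern is written to a scratch file and LSAN_OPTIONS points the sanitizer at it. Condensed into a standalone sketch using the same path and pattern shown in the log:

    # Build an LSan suppression file and activate it for child processes.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >> "$supp"        # known leak reported via FUSE
    export LSAN_OPTIONS="suppressions=$supp"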
00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:28.039 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1599764 ]] 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1599764 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
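The `kill -0 1599764` traced above sends no signal at all; it only asks the kernel whether that PID exists and is signalable, which is how the script confirms the test process is still alive before set_test_storage reserves 2 GiB of scratch space for it. The same probe in isolation, using the PID from this run:

    # kill -0 succeeds iff the process exists and we may signal it.
    if kill -0 1599764 2>/dev/null; then
        echo 'process is alive'
    else
        echo 'process has exited'
    fi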
00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.ZOg9Mh 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZOg9Mh/tests/target /tmp/spdk.ZOg9Mh 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=434749440 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:28.040 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4849680384 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123398742016 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5957775360 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23371776 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=387072 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=116736 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.040 13:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677785600 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=475136 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:28.040 * Looking for test storage... 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123398742016 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:28.040 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8172367872 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:28.041 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.302 --rc genhtml_branch_coverage=1 00:10:28.302 --rc genhtml_function_coverage=1 00:10:28.302 --rc genhtml_legend=1 00:10:28.302 --rc geninfo_all_blocks=1 00:10:28.302 --rc geninfo_unexecuted_blocks=1 00:10:28.302 00:10:28.302 ' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.302 --rc genhtml_branch_coverage=1 00:10:28.302 --rc genhtml_function_coverage=1 00:10:28.302 --rc genhtml_legend=1 00:10:28.302 --rc geninfo_all_blocks=1 00:10:28.302 --rc geninfo_unexecuted_blocks=1 00:10:28.302 00:10:28.302 ' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.302 --rc genhtml_branch_coverage=1 00:10:28.302 --rc genhtml_function_coverage=1 00:10:28.302 --rc genhtml_legend=1 00:10:28.302 --rc geninfo_all_blocks=1 00:10:28.302 --rc geninfo_unexecuted_blocks=1 00:10:28.302 00:10:28.302 ' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:28.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.302 --rc genhtml_branch_coverage=1 00:10:28.302 --rc genhtml_function_coverage=1 00:10:28.302 --rc genhtml_legend=1 00:10:28.302 --rc geninfo_all_blocks=1 00:10:28.302 --rc geninfo_unexecuted_blocks=1 00:10:28.302 00:10:28.302 ' 00:10:28.302 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.302 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.303 13:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.303 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.445 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:36.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:36.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:36.446 13:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:36.446 Found net devices under 0000:31:00.0: cvl_0_0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:36.446 Found net devices under 0000:31:00.1: cvl_0_1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:36.446 13:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:36.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:10:36.446 00:10:36.446 --- 10.0.0.2 ping statistics --- 00:10:36.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.446 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:36.446 00:10:36.446 --- 10.0.0.1 ping statistics --- 00:10:36.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.446 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.446 ************************************ 00:10:36.446 START TEST nvmf_filesystem_no_in_capsule 00:10:36.446 ************************************ 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1603445 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1603445 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.446 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1603445 ']' 00:10:36.446 
13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.447 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:36.447 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.447 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:36.447 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.447 [2024-11-06 13:07:17.668971] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:10:36.447 [2024-11-06 13:07:17.669023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.447 [2024-11-06 13:07:17.765181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.447 [2024-11-06 13:07:17.800792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.447 [2024-11-06 13:07:17.800826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.447 [2024-11-06 13:07:17.800834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.447 [2024-11-06 13:07:17.800841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.447 [2024-11-06 13:07:17.800846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
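waitforlisten above blocks until the freshly started nvmf_tgt (PID 1603445) comes up and answers on its RPC socket; the reactor notices that follow are the target's own startup output. A rough equivalent of that wait loop, assuming the default socket path /var/tmp/spdk.sock shown in the log (the real helper also retries an RPC call, which is omitted here):

    sock=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        kill -0 1603445 2>/dev/null || { echo 'target died' >&2; exit 1; }
        [ -S "$sock" ] && break     # RPC UNIX socket is up
        sleep 0.1
    done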
00:10:36.447 [2024-11-06 13:07:17.803172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.447 [2024-11-06 13:07:17.803527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.447 [2024-11-06 13:07:17.803677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.447 [2024-11-06 13:07:17.803677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.707 [2024-11-06 13:07:18.517252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.707 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:36.708 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.708 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 Malloc1 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.969 13:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 [2024-11-06 13:07:18.646150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:36.969 { 00:10:36.969 "name": "Malloc1", 00:10:36.969 "aliases": [ 00:10:36.969 "20c55c0d-afe4-438f-8a2a-843d2f4c9207" 00:10:36.969 ], 00:10:36.969 "product_name": "Malloc disk", 00:10:36.969 "block_size": 512, 00:10:36.969 "num_blocks": 1048576, 00:10:36.969 "uuid": "20c55c0d-afe4-438f-8a2a-843d2f4c9207", 00:10:36.969 "assigned_rate_limits": { 00:10:36.969 "rw_ios_per_sec": 0, 00:10:36.969 "rw_mbytes_per_sec": 0, 00:10:36.969 "r_mbytes_per_sec": 0, 00:10:36.969 "w_mbytes_per_sec": 0 00:10:36.969 }, 00:10:36.969 "claimed": true, 00:10:36.969 "claim_type": "exclusive_write", 00:10:36.969 "zoned": false, 00:10:36.969 "supported_io_types": { 00:10:36.969 "read": 
true, 00:10:36.969 "write": true, 00:10:36.969 "unmap": true, 00:10:36.969 "flush": true, 00:10:36.969 "reset": true, 00:10:36.969 "nvme_admin": false, 00:10:36.969 "nvme_io": false, 00:10:36.969 "nvme_io_md": false, 00:10:36.969 "write_zeroes": true, 00:10:36.969 "zcopy": true, 00:10:36.969 "get_zone_info": false, 00:10:36.969 "zone_management": false, 00:10:36.969 "zone_append": false, 00:10:36.969 "compare": false, 00:10:36.969 "compare_and_write": false, 00:10:36.969 "abort": true, 00:10:36.969 "seek_hole": false, 00:10:36.969 "seek_data": false, 00:10:36.969 "copy": true, 00:10:36.969 "nvme_iov_md": false 00:10:36.969 }, 00:10:36.969 "memory_domains": [ 00:10:36.969 { 00:10:36.969 "dma_device_id": "system", 00:10:36.969 "dma_device_type": 1 00:10:36.969 }, 00:10:36.969 { 00:10:36.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.969 "dma_device_type": 2 00:10:36.969 } 00:10:36.969 ], 00:10:36.969 "driver_specific": {} 00:10:36.969 } 00:10:36.969 ]' 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:36.969 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:38.899 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.899 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:38.899 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.899 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:38.899 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:40.814 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:41.074 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.016 ************************************ 00:10:42.016 START TEST filesystem_ext4 00:10:42.016 ************************************ 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
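(Annotation, for reference: the host-side setup the harness ran just above — connect to the target, resolve the kernel block device by its serial, carve one partition — reduces to the sketch below. The NQN, host UUID, address, and serial are the values from this log; nvme-cli and parted are assumed to be installed.)
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# resolve the kernel block device by the subsystem serial number
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
# one GPT partition spanning the whole 512 MiB namespace
parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe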
00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:42.016 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:42.016 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.276 Discarding device blocks: 0/522240 done 00:10:42.276 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:42.276 Filesystem UUID: accd5f79-46da-47cf-8555-d73da2fcc705 00:10:42.276 Superblock backups stored on blocks: 00:10:42.276 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:42.276 00:10:42.276 Allocating group tables: 0/64 done 00:10:42.276 Writing inode tables: 0/64 done 00:10:43.659 Creating journal (8192 blocks): done 00:10:45.433 Writing superblocks and filesystem accounting information: 0/64 10/64 done 00:10:45.433 00:10:45.433 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:45.433 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.019 
13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1603445 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.019 00:10:52.019 real 0m9.441s 00:10:52.019 user 0m0.039s 00:10:52.019 sys 0m0.070s 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.019 ************************************ 00:10:52.019 END TEST filesystem_ext4 00:10:52.019 ************************************ 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.019 ************************************ 00:10:52.019 START TEST filesystem_btrfs 00:10:52.019 ************************************ 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:52.019 13:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.019 btrfs-progs v6.8.1 00:10:52.019 See https://btrfs.readthedocs.io for more information. 00:10:52.019 00:10:52.019 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:52.019 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.019 this does not affect your deployments: 00:10:52.019 - DUP for metadata (-m dup) 00:10:52.019 - enabled no-holes (-O no-holes) 00:10:52.019 - enabled free-space-tree (-R free-space-tree) 00:10:52.019 00:10:52.019 Label: (null) 00:10:52.019 UUID: 9c7f318a-3cf1-4c8c-9208-0b04a9dc6f7f 00:10:52.019 Node size: 16384 00:10:52.019 Sector size: 4096 (CPU page size: 4096) 00:10:52.019 Filesystem size: 510.00MiB 00:10:52.019 Block group profiles: 00:10:52.019 Data: single 8.00MiB 00:10:52.019 Metadata: DUP 32.00MiB 00:10:52.019 System: DUP 8.00MiB 00:10:52.019 SSD detected: yes 00:10:52.019 Zoned device: no 00:10:52.019 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.019 Checksum: crc32c 00:10:52.019 Number of devices: 1 00:10:52.019 Devices: 00:10:52.019 ID SIZE PATH 00:10:52.019 1 510.00MiB /dev/nvme0n1p1 00:10:52.019 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:52.019 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.590 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1603445 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.851 
13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.851 00:10:52.851 real 0m1.097s 00:10:52.851 user 0m0.029s 00:10:52.851 sys 0m0.116s 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.851 ************************************ 00:10:52.851 END TEST filesystem_btrfs 00:10:52.851 ************************************ 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.851 ************************************ 00:10:52.851 START TEST filesystem_xfs 00:10:52.851 ************************************ 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:52.851 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.851 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.851 = sectsz=512 attr=2, projid32bit=1 00:10:52.851 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.851 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.851 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:52.851 = sunit=0 swidth=0 blks 00:10:52.851 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.851 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.851 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.851 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.793 Discarding blocks...Done. 00:10:53.793 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:53.793 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.337 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1603445 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.337 00:10:56.337 real 0m3.555s 00:10:56.337 user 0m0.030s 00:10:56.337 sys 0m0.074s 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.337 ************************************ 00:10:56.337 END TEST filesystem_xfs 00:10:56.337 ************************************ 00:10:56.337 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.598 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.598 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.860 13:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1603445 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1603445 ']' 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1603445 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1603445 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1603445' 00:10:56.860 killing process with pid 1603445 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1603445 00:10:56.860 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 1603445 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.122 00:10:57.122 real 0m21.248s 00:10:57.122 user 1m24.069s 00:10:57.122 sys 0m1.422s 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 ************************************ 00:10:57.122 END TEST nvmf_filesystem_no_in_capsule 00:10:57.122 ************************************ 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 ************************************ 00:10:57.122 START TEST nvmf_filesystem_in_capsule 00:10:57.122 ************************************ 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1607913 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1607913 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1607913 ']' 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
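(Annotation: the in-capsule variant that starts here differs from the run above only in the transport's in-capsule data size. The target setup xtraced below — where rpc_cmd is assumed to be the harness wrapper around SPDK's scripts/rpc.py — amounts to this sketch:)
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: accept 4 KiB of in-capsule data
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB malloc bdev with 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420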
00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:57.122 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.122 [2024-11-06 13:07:38.997779] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:10:57.122 [2024-11-06 13:07:38.997830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.383 [2024-11-06 13:07:39.091464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.383 [2024-11-06 13:07:39.124318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.383 [2024-11-06 13:07:39.124351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.383 [2024-11-06 13:07:39.124357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.383 [2024-11-06 13:07:39.124362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.383 [2024-11-06 13:07:39.124366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.383 [2024-11-06 13:07:39.125872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.383 [2024-11-06 13:07:39.126023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.383 [2024-11-06 13:07:39.126175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.383 [2024-11-06 13:07:39.126177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.956 [2024-11-06 13:07:39.848336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.956 13:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.956 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.217 Malloc1 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.217 [2024-11-06 13:07:39.980055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:58.217 13:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.217 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:58.217 { 00:10:58.217 "name": "Malloc1", 00:10:58.217 "aliases": [ 00:10:58.217 "912c4c7c-1943-4f88-b7ad-9e56c3e5215e" 00:10:58.217 ], 00:10:58.217 "product_name": "Malloc disk", 00:10:58.217 "block_size": 512, 00:10:58.217 "num_blocks": 1048576, 00:10:58.217 "uuid": "912c4c7c-1943-4f88-b7ad-9e56c3e5215e", 00:10:58.217 "assigned_rate_limits": { 00:10:58.217 "rw_ios_per_sec": 0, 00:10:58.217 "rw_mbytes_per_sec": 0, 00:10:58.217 "r_mbytes_per_sec": 0, 00:10:58.217 "w_mbytes_per_sec": 0 00:10:58.217 }, 00:10:58.217 "claimed": true, 00:10:58.217 "claim_type": "exclusive_write", 00:10:58.217 "zoned": false, 00:10:58.217 "supported_io_types": { 00:10:58.217 "read": true, 00:10:58.217 "write": true, 00:10:58.217 "unmap": true, 00:10:58.217 "flush": true, 00:10:58.217 "reset": true, 00:10:58.217 "nvme_admin": false, 00:10:58.217 "nvme_io": false, 00:10:58.217 "nvme_io_md": false, 00:10:58.217 "write_zeroes": true, 00:10:58.217 "zcopy": true, 00:10:58.217 "get_zone_info": false, 00:10:58.217 "zone_management": false, 00:10:58.217 "zone_append": false, 00:10:58.217 "compare": false, 00:10:58.217 "compare_and_write": false, 00:10:58.217 "abort": true, 00:10:58.217 "seek_hole": false, 00:10:58.217 "seek_data": false, 00:10:58.217 "copy": true, 00:10:58.217 "nvme_iov_md": false 00:10:58.217 }, 00:10:58.217 "memory_domains": [ 00:10:58.217 { 00:10:58.217 "dma_device_id": "system", 00:10:58.217 "dma_device_type": 1 00:10:58.217 }, 00:10:58.217 { 00:10:58.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.217 "dma_device_type": 2 00:10:58.217 } 00:10:58.217 ], 00:10:58.217 "driver_specific": {} 00:10:58.217 } 00:10:58.217 ]' 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:58.217 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.134 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.134 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:00.134 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.134 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:00.134 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:02.046 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:02.308 13:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:02.308 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:03.251 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:03.251 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:03.252 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:03.252 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.252 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.512 ************************************ 00:11:03.513 START TEST filesystem_in_capsule_ext4 00:11:03.513 ************************************ 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:03.513 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:03.513 mke2fs 1.47.0 (5-Feb-2023) 00:11:03.513 Discarding device blocks: 0/522240 done 00:11:03.513 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:03.513 Filesystem UUID: 0f8265cf-6e83-4ea6-9c68-017aa1cb7d5d 00:11:03.513 Superblock backups stored on blocks: 00:11:03.513 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:03.513 00:11:03.513 Allocating group tables: 0/64 done 00:11:03.513 Writing inode tables: 
0/64 done 00:11:03.774 Creating journal (8192 blocks): done 00:11:05.989 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:05.989 00:11:05.989 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:05.989 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1607913 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:12.640 00:11:12.640 real 0m8.452s 00:11:12.640 user 0m0.040s 00:11:12.640 sys 0m0.067s 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.640 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:12.640 ************************************ 00:11:12.641 END TEST filesystem_in_capsule_ext4 00:11:12.641 ************************************ 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.641 
************************************ 00:11:12.641 START TEST filesystem_in_capsule_btrfs 00:11:12.641 ************************************ 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:12.641 btrfs-progs v6.8.1 00:11:12.641 See https://btrfs.readthedocs.io for more information. 00:11:12.641 00:11:12.641 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:12.641 NOTE: several default settings have changed in version 5.15, please make sure 00:11:12.641 this does not affect your deployments: 00:11:12.641 - DUP for metadata (-m dup) 00:11:12.641 - enabled no-holes (-O no-holes) 00:11:12.641 - enabled free-space-tree (-R free-space-tree) 00:11:12.641 00:11:12.641 Label: (null) 00:11:12.641 UUID: c5297f7b-1030-437d-a802-464f937e9095 00:11:12.641 Node size: 16384 00:11:12.641 Sector size: 4096 (CPU page size: 4096) 00:11:12.641 Filesystem size: 510.00MiB 00:11:12.641 Block group profiles: 00:11:12.641 Data: single 8.00MiB 00:11:12.641 Metadata: DUP 32.00MiB 00:11:12.641 System: DUP 8.00MiB 00:11:12.641 SSD detected: yes 00:11:12.641 Zoned device: no 00:11:12.641 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:12.641 Checksum: crc32c 00:11:12.641 Number of devices: 1 00:11:12.641 Devices: 00:11:12.641 ID SIZE PATH 00:11:12.641 1 510.00MiB /dev/nvme0n1p1 00:11:12.641 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:12.641 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1607913 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.211 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.211 00:11:13.211 real 0m1.226s 00:11:13.211 user 0m0.023s 00:11:13.211 sys 0m0.126s 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x
00:11:13.212 ************************************
00:11:13.212 END TEST filesystem_in_capsule_btrfs
00:11:13.212 ************************************
00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:13.212 ************************************
00:11:13.212 START TEST filesystem_in_capsule_xfs
00:11:13.212 ************************************
00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:13.212 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:13.212 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:13.212 = sectsz=512 attr=2, projid32bit=1
00:11:13.212 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:13.212 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:13.212 data = bsize=4096 blocks=130560, imaxpct=25
00:11:13.212 = sunit=0 swidth=0 blks
00:11:13.212 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:13.212 log =internal log bsize=4096 blocks=16384, version=2
00:11:13.212 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:13.212 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:14.596 Discarding blocks...Done.
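A condensed sketch of the smoke test that the xtrace above steps through (target/filesystem.sh lines 21-43): format the partition on the namespace the target exports over NVMe/TCP, mount it, write and delete a file, unmount, then verify that the target process and the block devices survived. Variable names here are illustrative; the force flag mirrors the make_filesystem helper seen in the trace (-f for xfs and btrfs, presumably -F for ext4):

  fstype=xfs                     # the suite repeats this for ext4, btrfs and xfs
  dev=/dev/nvme0n1p1             # partition on the namespace attached via nvme connect
  nvmfpid=1607913                # nvmf target process started earlier in the run

  force=-f; [ "$fstype" = ext4 ] && force=-F
  mkfs.$fstype $force "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync  # push a small write through the filesystem to the target
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1    # controller still attached
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible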
00:11:14.596 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:14.596 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.506 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.506 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:16.506 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.506 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1607913 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.506 00:11:16.506 real 0m3.056s 00:11:16.506 user 0m0.022s 00:11:16.506 sys 0m0.085s 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.506 ************************************ 00:11:16.506 END TEST filesystem_in_capsule_xfs 00:11:16.506 ************************************ 00:11:16.506 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1607913 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1607913 ']' 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1607913 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1607913 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1607913' 00:11:16.767 killing process with pid 1607913 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1607913 00:11:16.767 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1607913 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:17.027 00:11:17.027 real 0m19.906s 00:11:17.027 user 1m18.755s 00:11:17.027 sys 0m1.423s 00:11:17.027 13:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.027 ************************************ 00:11:17.027 END TEST nvmf_filesystem_in_capsule 00:11:17.027 ************************************ 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.027 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.027 rmmod nvme_tcp 00:11:17.027 rmmod nvme_fabrics 00:11:17.027 rmmod nvme_keyring 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.286 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.196 00:11:19.196 real 0m51.484s 00:11:19.196 user 2m45.185s 00:11:19.196 sys 0m8.764s 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.196 
************************************ 00:11:19.196 END TEST nvmf_filesystem 00:11:19.196 ************************************ 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.196 13:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.458 ************************************ 00:11:19.458 START TEST nvmf_target_discovery 00:11:19.458 ************************************ 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:19.458 * Looking for test storage... 00:11:19.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:19.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.458 --rc genhtml_branch_coverage=1 00:11:19.458 --rc genhtml_function_coverage=1 00:11:19.458 --rc genhtml_legend=1 00:11:19.458 --rc geninfo_all_blocks=1 00:11:19.458 --rc geninfo_unexecuted_blocks=1 00:11:19.458 00:11:19.458 ' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:19.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.458 --rc genhtml_branch_coverage=1 00:11:19.458 --rc genhtml_function_coverage=1 00:11:19.458 --rc genhtml_legend=1 00:11:19.458 --rc geninfo_all_blocks=1 00:11:19.458 --rc geninfo_unexecuted_blocks=1 00:11:19.458 00:11:19.458 ' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:19.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.458 --rc genhtml_branch_coverage=1 00:11:19.458 --rc genhtml_function_coverage=1 00:11:19.458 --rc genhtml_legend=1 00:11:19.458 --rc geninfo_all_blocks=1 00:11:19.458 --rc geninfo_unexecuted_blocks=1 00:11:19.458 00:11:19.458 ' 00:11:19.458 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:19.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.458 --rc genhtml_branch_coverage=1 00:11:19.458 --rc genhtml_function_coverage=1 00:11:19.458 --rc genhtml_legend=1 00:11:19.458 --rc geninfo_all_blocks=1 00:11:19.458 --rc geninfo_unexecuted_blocks=1 00:11:19.458 00:11:19.458 ' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.459 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.599 13:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:27.599 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:27.599 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:27.599 Found net devices under 0000:31:00.0: cvl_0_0 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
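The NIC probe traced here amounts to mapping each supported PCI function to its kernel network interface through sysfs and keeping only links that are up. A rough equivalent of the nvmf/common.sh logic above; reading operstate is an assumption about how the "up" test is implemented:

  pci_devs=(0000:31:00.0 0000:31:00.1)   # the two E810 ports (0x8086:0x159b) found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      found=()
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          # keep the interface only if its link is up (assumed check)
          [[ $(cat "$netdir/operstate" 2>/dev/null) == up ]] && found+=("${netdir##*/}")
      done
      echo "Found net devices under $pci: ${found[*]}"
      net_devs+=("${found[@]}")
  done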
00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:27.599 Found net devices under 0000:31:00.1: cvl_0_1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.599 13:08:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:27.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:27.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.708 ms
00:11:27.599 
00:11:27.599 --- 10.0.0.2 ping statistics ---
00:11:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.599 rtt min/avg/max/mdev = 0.708/0.708/0.708/0.000 ms
00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:27.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:27.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:11:27.599 
00:11:27.599 --- 10.0.0.1 ping statistics ---
00:11:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.599 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.599 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1616285 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1616285 13:08:09
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1616285 ']' 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:27.599 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:27.599 [2024-11-06 13:08:09.082155] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:11:27.599 [2024-11-06 13:08:09.082227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.599 [2024-11-06 13:08:09.183177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.599 [2024-11-06 13:08:09.237429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.599 [2024-11-06 13:08:09.237487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.599 [2024-11-06 13:08:09.237496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.599 [2024-11-06 13:08:09.237503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.599 [2024-11-06 13:08:09.237510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
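The target is launched inside the cvl_0_0_ns_spdk namespace so the kernel initiator on the host side of the link can reach it over TCP. The provisioning that the trace walks through next reduces to a handful of RPCs; a sketch using SPDK's scripts/rpc.py (the test issues the same calls through its rpc_cmd wrapper, and the flags below are copied verbatim from the trace, not glossed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $rpc bdev_null_create Null$i 102400 512    # 102400 MB null bdev, 512-byte blocks
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service itself
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # appears as log entry 5

This is what produces the six discovery log records that nvme discover reports further down.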
00:11:27.599 [2024-11-06 13:08:09.239634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.599 [2024-11-06 13:08:09.239809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.599 [2024-11-06 13:08:09.239916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.599 [2024-11-06 13:08:09.239918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 [2024-11-06 13:08:09.959952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 Null1 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 [2024-11-06 13:08:10.020470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 Null2 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.174 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:28.435 Null3 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 Null4 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.435 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:11:28.697 00:11:28.697 Discovery Log Number of Records 6, Generation counter 6 00:11:28.697 =====Discovery Log Entry 0====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: current discovery subsystem 00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4420 00:11:28.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: explicit discovery connections, duplicate discovery information 00:11:28.697 sectype: none 00:11:28.697 =====Discovery Log Entry 1====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: nvme subsystem 00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4420 00:11:28.697 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: none 00:11:28.697 sectype: none 00:11:28.697 =====Discovery Log Entry 2====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: nvme subsystem 00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4420 00:11:28.697 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: none 00:11:28.697 sectype: none 00:11:28.697 =====Discovery Log Entry 3====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: nvme subsystem 00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4420 00:11:28.697 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: none 00:11:28.697 sectype: none 00:11:28.697 =====Discovery Log Entry 4====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: nvme subsystem 
00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4420 00:11:28.697 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: none 00:11:28.697 sectype: none 00:11:28.697 =====Discovery Log Entry 5====== 00:11:28.697 trtype: tcp 00:11:28.697 adrfam: ipv4 00:11:28.697 subtype: discovery subsystem referral 00:11:28.697 treq: not required 00:11:28.697 portid: 0 00:11:28.697 trsvcid: 4430 00:11:28.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.697 traddr: 10.0.0.2 00:11:28.697 eflags: none 00:11:28.697 sectype: none 00:11:28.697 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.697 Perform nvmf subsystem discovery via RPC 00:11:28.697 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.697 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.697 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.697 [ 00:11:28.697 { 00:11:28.697 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.697 "subtype": "Discovery", 00:11:28.697 "listen_addresses": [ 00:11:28.697 { 00:11:28.697 "trtype": "TCP", 00:11:28.697 "adrfam": "IPv4", 00:11:28.697 "traddr": "10.0.0.2", 00:11:28.697 "trsvcid": "4420" 00:11:28.697 } 00:11:28.697 ], 00:11:28.697 "allow_any_host": true, 00:11:28.697 "hosts": [] 00:11:28.697 }, 00:11:28.697 { 00:11:28.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.697 "subtype": "NVMe", 00:11:28.697 "listen_addresses": [ 00:11:28.697 { 00:11:28.697 "trtype": "TCP", 00:11:28.697 "adrfam": "IPv4", 00:11:28.697 "traddr": "10.0.0.2", 00:11:28.697 "trsvcid": "4420" 00:11:28.697 } 00:11:28.697 ], 00:11:28.697 "allow_any_host": true, 00:11:28.697 "hosts": [], 00:11:28.697 "serial_number": "SPDK00000000000001", 00:11:28.697 "model_number": "SPDK bdev Controller", 00:11:28.697 "max_namespaces": 32, 00:11:28.697 "min_cntlid": 1, 00:11:28.697 "max_cntlid": 65519, 00:11:28.697 "namespaces": [ 00:11:28.697 { 00:11:28.697 "nsid": 1, 00:11:28.697 "bdev_name": "Null1", 00:11:28.697 "name": "Null1", 00:11:28.697 "nguid": "D5CF8D18DB754D1190F2D2CDCD25C45A", 00:11:28.697 "uuid": "d5cf8d18-db75-4d11-90f2-d2cdcd25c45a" 00:11:28.697 } 00:11:28.697 ] 00:11:28.697 }, 00:11:28.697 { 00:11:28.697 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.697 "subtype": "NVMe", 00:11:28.697 "listen_addresses": [ 00:11:28.697 { 00:11:28.697 "trtype": "TCP", 00:11:28.697 "adrfam": "IPv4", 00:11:28.697 "traddr": "10.0.0.2", 00:11:28.697 "trsvcid": "4420" 00:11:28.697 } 00:11:28.697 ], 00:11:28.697 "allow_any_host": true, 00:11:28.697 "hosts": [], 00:11:28.697 "serial_number": "SPDK00000000000002", 00:11:28.697 "model_number": "SPDK bdev Controller", 00:11:28.697 "max_namespaces": 32, 00:11:28.697 "min_cntlid": 1, 00:11:28.697 "max_cntlid": 65519, 00:11:28.697 "namespaces": [ 00:11:28.697 { 00:11:28.697 "nsid": 1, 00:11:28.697 "bdev_name": "Null2", 00:11:28.697 "name": "Null2", 00:11:28.697 "nguid": "6E7FF228515E43CC84025A60B14ECFCF", 00:11:28.697 "uuid": "6e7ff228-515e-43cc-8402-5a60b14ecfcf" 00:11:28.697 } 00:11:28.697 ] 00:11:28.697 }, 00:11:28.697 { 00:11:28.697 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.697 "subtype": "NVMe", 00:11:28.697 "listen_addresses": [ 00:11:28.697 { 00:11:28.697 "trtype": "TCP", 00:11:28.697 "adrfam": "IPv4", 00:11:28.697 "traddr": "10.0.0.2", 
00:11:28.697 "trsvcid": "4420" 00:11:28.697 } 00:11:28.697 ], 00:11:28.697 "allow_any_host": true, 00:11:28.697 "hosts": [], 00:11:28.697 "serial_number": "SPDK00000000000003", 00:11:28.697 "model_number": "SPDK bdev Controller", 00:11:28.697 "max_namespaces": 32, 00:11:28.697 "min_cntlid": 1, 00:11:28.697 "max_cntlid": 65519, 00:11:28.697 "namespaces": [ 00:11:28.697 { 00:11:28.697 "nsid": 1, 00:11:28.697 "bdev_name": "Null3", 00:11:28.697 "name": "Null3", 00:11:28.697 "nguid": "008193027D544D8D825518D94A7E0D56", 00:11:28.697 "uuid": "00819302-7d54-4d8d-8255-18d94a7e0d56" 00:11:28.697 } 00:11:28.697 ] 00:11:28.697 }, 00:11:28.697 { 00:11:28.697 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.697 "subtype": "NVMe", 00:11:28.697 "listen_addresses": [ 00:11:28.697 { 00:11:28.697 "trtype": "TCP", 00:11:28.697 "adrfam": "IPv4", 00:11:28.697 "traddr": "10.0.0.2", 00:11:28.697 "trsvcid": "4420" 00:11:28.697 } 00:11:28.697 ], 00:11:28.697 "allow_any_host": true, 00:11:28.697 "hosts": [], 00:11:28.697 "serial_number": "SPDK00000000000004", 00:11:28.697 "model_number": "SPDK bdev Controller", 00:11:28.697 "max_namespaces": 32, 00:11:28.697 "min_cntlid": 1, 00:11:28.697 "max_cntlid": 65519, 00:11:28.697 "namespaces": [ 00:11:28.697 { 00:11:28.697 "nsid": 1, 00:11:28.697 "bdev_name": "Null4", 00:11:28.697 "name": "Null4", 00:11:28.697 "nguid": "CDD45377368140A7BAAE6516BC68DD2D", 00:11:28.698 "uuid": "cdd45377-3681-40a7-baae-6516bc68dd2d" 00:11:28.698 } 00:11:28.698 ] 00:11:28.698 } 00:11:28.698 ] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:28.698 13:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:28.960 rmmod nvme_tcp 00:11:28.960 rmmod nvme_fabrics 00:11:28.960 rmmod nvme_keyring 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1616285 ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1616285 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1616285 ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1616285 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1616285 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1616285' 00:11:28.960 killing process with pid 1616285 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1616285 00:11:28.960 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1616285 00:11:29.225 13:08:10 
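Teardown mirrors setup in reverse, after which the target process is stopped and the initiator-side kernel modules are unloaded. A sketch of the sequence traced above; $nvmfpid stands in for the target pid (1616285 in this run):

  for i in 1 2 3 4; do
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc.py bdev_null_delete Null$i
  done
  rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  kill "$nvmfpid" && wait "$nvmfpid"                # killprocess in the log
  modprobe -r nvme-tcp nvme-fabrics                 # the rmmod nvme_tcp / nvme_fabrics lines above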
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.225 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.144 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.144 00:11:31.144 real 0m11.904s 00:11:31.144 user 0m9.165s 00:11:31.144 sys 0m6.216s 00:11:31.144 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.144 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:31.144 ************************************ 00:11:31.144 END TEST nvmf_target_discovery 00:11:31.144 ************************************ 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.406 ************************************ 00:11:31.406 START TEST nvmf_referrals 00:11:31.406 ************************************ 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.406 * Looking for test storage... 
00:11:31.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:31.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.406 --rc genhtml_branch_coverage=1 00:11:31.406 --rc genhtml_function_coverage=1 00:11:31.406 --rc genhtml_legend=1 00:11:31.406 --rc geninfo_all_blocks=1 00:11:31.406 --rc geninfo_unexecuted_blocks=1 00:11:31.406 00:11:31.406 ' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:31.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.406 --rc genhtml_branch_coverage=1 00:11:31.406 --rc genhtml_function_coverage=1 00:11:31.406 --rc genhtml_legend=1 00:11:31.406 --rc geninfo_all_blocks=1 00:11:31.406 --rc geninfo_unexecuted_blocks=1 00:11:31.406 00:11:31.406 ' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:31.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.406 --rc genhtml_branch_coverage=1 00:11:31.406 --rc genhtml_function_coverage=1 00:11:31.406 --rc genhtml_legend=1 00:11:31.406 --rc geninfo_all_blocks=1 00:11:31.406 --rc geninfo_unexecuted_blocks=1 00:11:31.406 00:11:31.406 ' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:31.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.406 --rc genhtml_branch_coverage=1 00:11:31.406 --rc genhtml_function_coverage=1 00:11:31.406 --rc genhtml_legend=1 00:11:31.406 --rc geninfo_all_blocks=1 00:11:31.406 --rc geninfo_unexecuted_blocks=1 00:11:31.406 00:11:31.406 ' 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.406 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:31.668 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.669 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:39.814 13:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.814 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:39.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:39.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:39.815 
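gather_supported_nvmf_pci_devs walks the PCI bus and matches vendor:device pairs against the e810/x722/mlx allow-lists assembled above; 0x8086:0x159b (the Intel E810 family) matches twice on this host. A standalone sketch of the same sysfs check, written fresh for illustration rather than lifted from common.sh:

  for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "${pci##*/}: $(ls "$pci/net" 2>/dev/null)"  # PCI address and its net device (cvl_0_0 / cvl_0_1 here)
  done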
13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:39.815 Found net devices under 0000:31:00.0: cvl_0_0 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:39.815 Found net devices under 0000:31:00.1: cvl_0_1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.815 13:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:39.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:11:39.815 00:11:39.815 --- 10.0.0.2 ping statistics --- 00:11:39.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.815 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:11:39.815 00:11:39.815 --- 10.0.0.1 ping statistics --- 00:11:39.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.815 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1620805 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1620805 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1620805 ']' 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
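nvmf_tcp_init splits the two detected ports across network namespaces so that the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over the physical link. The sequence traced above, consolidated; the target is subsequently launched under ip netns exec cvl_0_0_ns_spdk:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability check (0.625 ms here)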
00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.815 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:39.815 [2024-11-06 13:08:21.033849] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:11:39.815 [2024-11-06 13:08:21.033921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.815 [2024-11-06 13:08:21.135709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.815 [2024-11-06 13:08:21.189478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.815 [2024-11-06 13:08:21.189532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.815 [2024-11-06 13:08:21.189541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.816 [2024-11-06 13:08:21.189549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.816 [2024-11-06 13:08:21.189556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.816 [2024-11-06 13:08:21.191659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.816 [2024-11-06 13:08:21.191822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.816 [2024-11-06 13:08:21.191891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.816 [2024-11-06 13:08:21.191896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.077 [2024-11-06 13:08:21.908789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:40.077 [2024-11-06 13:08:21.925157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.077 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.339 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.339 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:40.601 13:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.601 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:40.863 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.125 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.387 13:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:41.387 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:41.647 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:41.647 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:41.647 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:41.647 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:41.647 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:41.648 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.648 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:41.909 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:42.170 13:08:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
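The run above is target/referrals.sh exercising SPDK's discovery-referral RPCs end to end: it registers three referrals, confirms the count and addresses both through the RPC interface and through an NVMe discovery log page, removes them, then repeats the cycle with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1) and checks the advertised subnqn. A minimal standalone sketch of that round trip, assuming a running nvmf_tgt with a discovery listener on 10.0.0.2:8009 and SPDK's scripts/rpc.py on PATH (addresses and ports are the ones from this log; the test also passes --hostnqn/--hostid to nvme discover, omitted here since nvme-cli can fall back to /etc/nvme/hostnqn):

rpc=./scripts/rpc.py

# Register three referrals on the discovery subsystem
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view: referral addresses as reported over RPC
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Host-side view: the same referrals read back from the discovery log page
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort

# Tear the referrals down again; get_referrals should then report length 0
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done

Comparing the sorted RPC view against the sorted discovery-log view is the test's core assertion: both sides must report the same referral set.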
00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.431 rmmod nvme_tcp 00:11:42.431 rmmod nvme_fabrics 00:11:42.431 rmmod nvme_keyring 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1620805 ']' 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1620805 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1620805 ']' 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1620805 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1620805 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1620805' 00:11:42.431 killing process with pid 1620805 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 1620805 00:11:42.431 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1620805 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.693 13:08:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.693 13:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.606 00:11:44.606 real 0m13.327s 00:11:44.606 user 0m15.731s 00:11:44.606 sys 0m6.633s 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:44.606 ************************************ 00:11:44.606 END TEST nvmf_referrals 00:11:44.606 ************************************ 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.606 ************************************ 00:11:44.606 START TEST nvmf_connect_disconnect 00:11:44.606 ************************************ 00:11:44.606 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:44.867 * Looking for test storage... 00:11:44.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.867 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:44.867 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.868 13:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.868 --rc genhtml_branch_coverage=1 00:11:44.868 --rc genhtml_function_coverage=1 00:11:44.868 --rc genhtml_legend=1 00:11:44.868 --rc geninfo_all_blocks=1 00:11:44.868 --rc geninfo_unexecuted_blocks=1 00:11:44.868 00:11:44.868 ' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.868 --rc genhtml_branch_coverage=1 00:11:44.868 --rc genhtml_function_coverage=1 00:11:44.868 --rc genhtml_legend=1 00:11:44.868 --rc geninfo_all_blocks=1 00:11:44.868 --rc geninfo_unexecuted_blocks=1 00:11:44.868 00:11:44.868 ' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.868 --rc genhtml_branch_coverage=1 00:11:44.868 --rc genhtml_function_coverage=1 00:11:44.868 --rc genhtml_legend=1 00:11:44.868 --rc geninfo_all_blocks=1 00:11:44.868 --rc geninfo_unexecuted_blocks=1 00:11:44.868 00:11:44.868 ' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:44.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.868 --rc genhtml_branch_coverage=1 00:11:44.868 --rc genhtml_function_coverage=1 00:11:44.868 --rc genhtml_legend=1 00:11:44.868 --rc geninfo_all_blocks=1 00:11:44.868 --rc geninfo_unexecuted_blocks=1 00:11:44.868 00:11:44.868 ' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.868 13:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.868 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.869 13:08:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.013 
13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.013 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:53.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.014 
13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:53.014 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:53.014 Found net devices under 0000:31:00.0: cvl_0_0 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
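The block above is nvmf/common.sh enumerating supported NICs: it matches PCI device IDs against its e810/x722/mlx tables, then resolves each matching PCI function to its kernel net device through sysfs (the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansions visible in the trace). A sketch of that sysfs lookup, using the two PCI addresses found in this run (they will differ on other machines):

# Resolve each candidate PCI function to its kernel net device via sysfs
for pci in 0000:31:00.0 0000:31:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        # Each entry under .../net/ is a netdev backed by this PCI function
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done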
00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:53.014 Found net devices under 0000:31:00.1: cvl_0_1 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.014 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:11:53.014 00:11:53.014 --- 10.0.0.2 ping statistics --- 00:11:53.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.014 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:11:53.014 00:11:53.014 --- 10.0.0.1 ping statistics --- 00:11:53.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.014 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.014 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1625823 00:11:53.015 13:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1625823 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1625823 ']' 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.015 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.015 [2024-11-06 13:08:34.415610] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:11:53.015 [2024-11-06 13:08:34.415675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.015 [2024-11-06 13:08:34.514926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.015 [2024-11-06 13:08:34.567797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.015 [2024-11-06 13:08:34.567847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.015 [2024-11-06 13:08:34.567856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.015 [2024-11-06 13:08:34.567868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.015 [2024-11-06 13:08:34.567874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
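By this point the harness has moved one port of the NIC pair into a private network namespace and started the target inside it, so initiator traffic (root namespace, cvl_0_1, 10.0.0.1) and target traffic (cvl_0_0_ns_spdk, cvl_0_0, 10.0.0.2) cross a real link. A condensed sketch of that setup and launch, with the interface names, addresses, and nvmf_tgt flags taken from the log above (waitforlisten is an autotest helper that polls the RPC socket until the app is up; a plain backgrounded launch stands in for it here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, matching the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch the target inside the namespace (path relative to an SPDK build tree)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &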
00:11:53.015 [2024-11-06 13:08:34.570292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.015 [2024-11-06 13:08:34.570453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.015 [2024-11-06 13:08:34.570612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.015 [2024-11-06 13:08:34.570612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.587 [2024-11-06 13:08:35.318612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.587 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 13:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.588 [2024-11-06 13:08:35.399313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:53.588 13:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:57.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.907 rmmod nvme_tcp 00:12:11.907 rmmod nvme_fabrics 00:12:11.907 rmmod nvme_keyring 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1625823 ']' 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1625823 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1625823 ']' 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1625823 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
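
(Annotation: stripped of the xtrace noise, the connect_disconnect test body above is a short RPC sequence followed by a connect/disconnect loop. A sketch of the equivalent direct calls, assuming rpc_cmd forwards to scripts/rpc.py on /var/tmp/spdk.sock and that the loop pairs nvme-cli connect/disconnect; the loop body itself is not echoed here, only its "disconnected 1 controller(s)" output:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512        # 64 MB, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do                         # num_iterations=5 above
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... disconnected line
    done

The teardown that follows, nvmftestfini, kills the target pid and unloads the nvme-tcp/nvme-fabrics modules, as the rmmod lines below show.)
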
00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.907 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1625823 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1625823' 00:12:12.168 killing process with pid 1625823 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1625823 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1625823 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.168 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.717 00:12:14.717 real 0m29.556s 00:12:14.717 user 1m19.516s 00:12:14.717 sys 0m7.174s 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:14.717 ************************************ 00:12:14.717 END TEST nvmf_connect_disconnect 00:12:14.717 ************************************ 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.717 13:08:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.717 ************************************ 00:12:14.717 START TEST nvmf_multitarget 00:12:14.717 ************************************ 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:14.717 * Looking for test storage... 00:12:14.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.717 --rc genhtml_branch_coverage=1 00:12:14.717 --rc genhtml_function_coverage=1 00:12:14.717 --rc genhtml_legend=1 00:12:14.717 --rc geninfo_all_blocks=1 00:12:14.717 --rc geninfo_unexecuted_blocks=1 00:12:14.717 00:12:14.717 ' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.717 --rc genhtml_branch_coverage=1 00:12:14.717 --rc genhtml_function_coverage=1 00:12:14.717 --rc genhtml_legend=1 00:12:14.717 --rc geninfo_all_blocks=1 00:12:14.717 --rc geninfo_unexecuted_blocks=1 00:12:14.717 00:12:14.717 ' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.717 --rc genhtml_branch_coverage=1 00:12:14.717 --rc genhtml_function_coverage=1 00:12:14.717 --rc genhtml_legend=1 00:12:14.717 --rc geninfo_all_blocks=1 00:12:14.717 --rc geninfo_unexecuted_blocks=1 00:12:14.717 00:12:14.717 ' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.717 --rc genhtml_branch_coverage=1 00:12:14.717 --rc genhtml_function_coverage=1 00:12:14.717 --rc genhtml_legend=1 00:12:14.717 --rc geninfo_all_blocks=1 00:12:14.717 --rc geninfo_unexecuted_blocks=1 00:12:14.717 00:12:14.717 ' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.717 13:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.717 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:14.718 13:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.718 13:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:22.948 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:22.948 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:22.948 Found net devices under 0000:31:00.0: cvl_0_0 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.948 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:22.949 Found net devices under 0000:31:00.1: cvl_0_1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:12:22.949 00:12:22.949 --- 10.0.0.2 ping statistics --- 00:12:22.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.949 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:22.949 00:12:22.949 --- 10.0.0.1 ping statistics --- 00:12:22.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.949 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.949 13:09:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1634083 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1634083 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1634083 ']' 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:22.949 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:22.949 [2024-11-06 13:09:04.097758] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:12:22.949 [2024-11-06 13:09:04.097821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.949 [2024-11-06 13:09:04.200380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.949 [2024-11-06 13:09:04.254520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.949 [2024-11-06 13:09:04.254576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.949 [2024-11-06 13:09:04.254585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.949 [2024-11-06 13:09:04.254592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.949 [2024-11-06 13:09:04.254599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.949 [2024-11-06 13:09:04.256912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.949 [2024-11-06 13:09:04.257071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.949 [2024-11-06 13:09:04.257229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.949 [2024-11-06 13:09:04.257230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.211 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:23.211 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:23.211 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:23.472 "nvmf_tgt_1" 00:12:23.472 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:23.472 "nvmf_tgt_2" 00:12:23.472 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:23.472 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:23.733 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:23.733 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:23.733 true 00:12:23.733 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.994 true 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.994 rmmod nvme_tcp 00:12:23.994 rmmod nvme_fabrics 00:12:23.994 rmmod nvme_keyring 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1634083 ']' 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1634083 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1634083 ']' 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1634083 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.994 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1634083 00:12:24.255 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:24.255 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:24.255 13:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1634083' 00:12:24.255 killing process with pid 1634083 00:12:24.255 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1634083 00:12:24.255 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1634083 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.255 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.804 00:12:26.804 real 0m12.024s 00:12:26.804 user 0m10.300s 00:12:26.804 sys 0m6.308s 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.804 ************************************ 00:12:26.804 END TEST nvmf_multitarget 00:12:26.804 ************************************ 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.804 ************************************ 00:12:26.804 START TEST nvmf_rpc 00:12:26.804 ************************************ 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:26.804 * Looking for test storage... 
00:12:26.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:26.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.804 --rc genhtml_branch_coverage=1 00:12:26.804 --rc genhtml_function_coverage=1 00:12:26.804 --rc genhtml_legend=1 00:12:26.804 --rc geninfo_all_blocks=1 00:12:26.804 --rc geninfo_unexecuted_blocks=1 00:12:26.804 00:12:26.804 ' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:26.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.804 --rc genhtml_branch_coverage=1 00:12:26.804 --rc genhtml_function_coverage=1 00:12:26.804 --rc genhtml_legend=1 00:12:26.804 --rc geninfo_all_blocks=1 00:12:26.804 --rc geninfo_unexecuted_blocks=1 00:12:26.804 00:12:26.804 ' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:26.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.804 --rc genhtml_branch_coverage=1 00:12:26.804 --rc genhtml_function_coverage=1 00:12:26.804 --rc genhtml_legend=1 00:12:26.804 --rc geninfo_all_blocks=1 00:12:26.804 --rc geninfo_unexecuted_blocks=1 00:12:26.804 00:12:26.804 ' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:26.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.804 --rc genhtml_branch_coverage=1 00:12:26.804 --rc genhtml_function_coverage=1 00:12:26.804 --rc genhtml_legend=1 00:12:26.804 --rc geninfo_all_blocks=1 00:12:26.804 --rc geninfo_unexecuted_blocks=1 00:12:26.804 00:12:26.804 ' 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.804 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.805 13:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.805 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:34.951 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:34.951 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.951 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:34.952 Found net devices under 0000:31:00.0: cvl_0_0 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:34.952 Found net devices under 0000:31:00.1: cvl_0_1 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.952 13:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.952 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:12:34.952 00:12:34.952 --- 10.0.0.2 ping statistics --- 00:12:34.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.952 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:12:34.952 00:12:34.952 --- 10.0.0.1 ping statistics --- 00:12:34.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.952 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1639281 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1639281 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1639281 ']' 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.952 13:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.952 [2024-11-06 13:09:16.278384] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
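
Network-wise, nvmf_tcp_init just built a two-namespace loopback out of one physical NIC: port cvl_0_0 moves into a private namespace as the target side, its sibling cvl_0_1 stays in the root namespace as the initiator, the pair gets 10.0.0.2/24 and 10.0.0.1/24, iptables opens TCP/4420, and one ping in each direction proves reachability before the target starts under ip netns exec. Condensed from the exact commands in the trace (address flushes omitted):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                             # private namespace for the target
  ip link set cvl_0_0 netns "$NS"                # target port moves inside
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> initiator
  # The target app is then wrapped the same way, as seen at nvmfappstart:
  # ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
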
00:12:34.952 [2024-11-06 13:09:16.278453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.952 [2024-11-06 13:09:16.378036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.952 [2024-11-06 13:09:16.431056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.952 [2024-11-06 13:09:16.431107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.952 [2024-11-06 13:09:16.431116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.952 [2024-11-06 13:09:16.431124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.952 [2024-11-06 13:09:16.431130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.952 [2024-11-06 13:09:16.433169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.952 [2024-11-06 13:09:16.433330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.952 [2024-11-06 13:09:16.433489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.952 [2024-11-06 13:09:16.433489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.214 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.214 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:35.214 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.214 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.214 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:35.475 "tick_rate": 2400000000, 00:12:35.475 "poll_groups": [ 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_000", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_001", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_002", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 
"current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_003", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [] 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.475 [2024-11-06 13:09:17.273922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:35.475 "tick_rate": 2400000000, 00:12:35.475 "poll_groups": [ 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_000", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [ 00:12:35.475 { 00:12:35.475 "trtype": "TCP" 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_001", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [ 00:12:35.475 { 00:12:35.475 "trtype": "TCP" 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_002", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [ 00:12:35.475 { 00:12:35.475 "trtype": "TCP" 
00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "nvmf_tgt_poll_group_003", 00:12:35.475 "admin_qpairs": 0, 00:12:35.475 "io_qpairs": 0, 00:12:35.475 "current_admin_qpairs": 0, 00:12:35.475 "current_io_qpairs": 0, 00:12:35.475 "pending_bdev_io": 0, 00:12:35.475 "completed_nvme_io": 0, 00:12:35.475 "transports": [ 00:12:35.475 { 00:12:35.475 "trtype": "TCP" 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:35.475 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:35.476 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.737 Malloc1 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.737 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
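
The two nvmf_get_stats calls bracket transport creation: before nvmf_create_transport -t tcp each of the four poll groups (one per core in the -m 0xF mask) reports an empty transports array, and afterwards each carries a {"trtype": "TCP"} entry; the jcount and jsum helpers then assert the counts by piping the JSON through jq and awk, exactly as traced above. The same assertions can be replayed against a saved copy of the RPC output (stats.json is a hypothetical file name):

  # Count poll groups; expects 4 for core mask 0xF:
  jq '.poll_groups[].name' stats.json | wc -l

  # Sum a numeric field across poll groups (0 while idle), mirroring jsum's jq|awk pipe:
  jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1} END {print s}'

  # Check every poll group picked up the TCP transport (exit status via -e):
  jq -e 'all(.poll_groups[]; .transports[0].trtype == "TCP")' stats.json
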
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 [2024-11-06 13:09:17.488591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:35.738 [2024-11-06 13:09:17.525629] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:35.738 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.738 could not add new controller: failed to write to nvme-fabrics device 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:35.738 13:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.738 13:09:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.652 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.652 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:37.652 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.652 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:37.652 13:09:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
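
waitforserial and waitforserial_disconnect, traced above, are simple pollers: after a connect they sleep and rerun lsblk -l -o NAME,SERIAL until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears, and after a disconnect until it is gone, each bounded by the i++ <= 15 retry loop visible in the trace. A minimal standalone rendering of the same idea (function names ours):

  wait_for_serial() {
      local serial=$1 want=${2:-1} i=0 n
      while (( i++ <= 15 )); do
          sleep 2
          n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( n == want )) && return 0          # expected device count reached
      done
      return 1
  }

  wait_for_serial_gone() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1
  }
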
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.564 [2024-11-06 13:09:21.331613] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:39.564 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:39.564 could not add new controller: failed to write to nvme-fabrics device 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.564 
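
The two deliberately failing connects (wrapped in NOT by rpc.sh) exercise subsystem access control: until the host NQN is registered, the target rejects the login with "Subsystem ... does not allow host ..." and nvme-cli surfaces the Input/output error recorded above. The trace then shows both ways of granting access, which map to these RPCs (rpc.py path is the usual SPDK location; adjust to your tree):

  # Allow one specific host NQN on the subsystem:
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

  # Or drop the allow-list entirely (-e enables allow_any_host; -d re-disables it):
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
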
13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.564 13:09:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.479 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.479 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:41.479 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.479 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:41.479 13:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:43.393 13:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.393 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:43.393 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.394 
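
From seq 1 5 here to the end of the section, the trace is five repetitions of one lifecycle, checking that a subsystem can be built, served, and torn down cleanly each time: create the subsystem, listen on 10.0.0.2:4420, attach Malloc1 as namespace 5, allow any host, connect, wait for the serial, disconnect, wait for it to vanish, remove the namespace, delete the subsystem. Condensed into a single loop using the RPCs from the trace (the wait_for_* helpers are the sketches from earlier, not part of SPDK):

  for i in $(seq 1 5); do
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      wait_for_serial SPDKISFASTANDAWESOME

      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      wait_for_serial_gone SPDKISFASTANDAWESOME

      ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
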
13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 [2024-11-06 13:09:25.055832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.394 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.778 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.778 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:44.778 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.778 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:44.778 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:46.692 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 [2024-11-06 13:09:28.769602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.867 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.867 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:48.867 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.867 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:48.867 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 [2024-11-06 13:09:32.436643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.779 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.162 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.162 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:52.162 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.162 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:52.162 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:54.074 
13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:54.074 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 [2024-11-06 13:09:36.147542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.334 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.250 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.250 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:56.250 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.250 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:56.250 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:58.164 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:58.164 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:58.164 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 [2024-11-06 13:09:39.894554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.165 13:09:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.553 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.553 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:59.553 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.553 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:59.553 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:02.100 
13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 [2024-11-06 13:09:43.609780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 [2024-11-06 13:09:43.673919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.100 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 
13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 [2024-11-06 13:09:43.742108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 [2024-11-06 13:09:43.814348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 [2024-11-06 13:09:43.878552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:02.101 "tick_rate": 2400000000, 00:13:02.101 "poll_groups": [ 00:13:02.101 { 00:13:02.101 "name": "nvmf_tgt_poll_group_000", 00:13:02.101 "admin_qpairs": 0, 00:13:02.101 "io_qpairs": 224, 00:13:02.101 "current_admin_qpairs": 0, 00:13:02.101 "current_io_qpairs": 0, 00:13:02.101 "pending_bdev_io": 0, 00:13:02.101 "completed_nvme_io": 283, 00:13:02.101 "transports": [ 00:13:02.101 { 00:13:02.101 "trtype": "TCP" 00:13:02.101 } 00:13:02.101 ] 00:13:02.101 }, 00:13:02.101 { 00:13:02.101 "name": "nvmf_tgt_poll_group_001", 00:13:02.101 "admin_qpairs": 1, 00:13:02.101 "io_qpairs": 223, 00:13:02.101 "current_admin_qpairs": 0, 00:13:02.101 "current_io_qpairs": 0, 00:13:02.101 "pending_bdev_io": 0, 00:13:02.101 "completed_nvme_io": 421, 00:13:02.101 "transports": [ 00:13:02.101 { 00:13:02.101 "trtype": "TCP" 00:13:02.101 } 00:13:02.101 ] 00:13:02.101 }, 00:13:02.101 { 00:13:02.101 "name": "nvmf_tgt_poll_group_002", 00:13:02.101 "admin_qpairs": 6, 00:13:02.101 "io_qpairs": 218, 00:13:02.101 "current_admin_qpairs": 0, 00:13:02.101 "current_io_qpairs": 0, 00:13:02.101 "pending_bdev_io": 0, 00:13:02.101 "completed_nvme_io": 273, 00:13:02.101 "transports": [ 00:13:02.101 { 00:13:02.101 "trtype": "TCP" 00:13:02.101 } 00:13:02.101 ] 00:13:02.101 }, 00:13:02.101 { 00:13:02.101 "name": "nvmf_tgt_poll_group_003", 00:13:02.101 "admin_qpairs": 0, 00:13:02.101 "io_qpairs": 224, 00:13:02.101 "current_admin_qpairs": 0, 00:13:02.101 "current_io_qpairs": 0, 00:13:02.101 "pending_bdev_io": 0, 00:13:02.101 "completed_nvme_io": 262, 00:13:02.101 "transports": [ 00:13:02.101 { 00:13:02.101 "trtype": "TCP" 00:13:02.101 } 00:13:02.101 ] 00:13:02.101 } 00:13:02.101 ] 00:13:02.101 }' 00:13:02.101 13:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:02.101 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.362 rmmod nvme_tcp 00:13:02.362 rmmod nvme_fabrics 00:13:02.362 rmmod nvme_keyring 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1639281 ']' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1639281 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1639281 ']' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1639281 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:02.362 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1639281 00:13:02.363 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:02.363 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:02.363 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1639281' 00:13:02.363 killing process with pid 1639281 00:13:02.363 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1639281 00:13:02.363 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1639281 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.623 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.536 00:13:04.536 real 0m38.126s 00:13:04.536 user 1m53.366s 00:13:04.536 sys 0m8.103s 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.536 ************************************ 00:13:04.536 END TEST nvmf_rpc 00:13:04.536 ************************************ 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.536 13:09:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.798 ************************************ 00:13:04.798 START TEST nvmf_invalid 00:13:04.798 ************************************ 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:04.798 * Looking for test storage... 
00:13:04.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.798 --rc genhtml_branch_coverage=1 00:13:04.798 --rc genhtml_function_coverage=1 00:13:04.798 --rc genhtml_legend=1 00:13:04.798 --rc geninfo_all_blocks=1 00:13:04.798 --rc geninfo_unexecuted_blocks=1 00:13:04.798 00:13:04.798 ' 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.798 --rc genhtml_branch_coverage=1 00:13:04.798 --rc genhtml_function_coverage=1 00:13:04.798 --rc genhtml_legend=1 00:13:04.798 --rc geninfo_all_blocks=1 00:13:04.798 --rc geninfo_unexecuted_blocks=1 00:13:04.798 00:13:04.798 ' 00:13:04.798 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:04.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.798 --rc genhtml_branch_coverage=1 00:13:04.799 --rc genhtml_function_coverage=1 00:13:04.799 --rc genhtml_legend=1 00:13:04.799 --rc geninfo_all_blocks=1 00:13:04.799 --rc geninfo_unexecuted_blocks=1 00:13:04.799 00:13:04.799 ' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:04.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.799 --rc genhtml_branch_coverage=1 00:13:04.799 --rc genhtml_function_coverage=1 00:13:04.799 --rc genhtml_legend=1 00:13:04.799 --rc geninfo_all_blocks=1 00:13:04.799 --rc geninfo_unexecuted_blocks=1 00:13:04.799 00:13:04.799 ' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:04.799 13:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.799 13:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:12.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:12.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.941 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:12.942 Found net devices under 0000:31:00.0: cvl_0_0 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:12.942 Found net devices under 0000:31:00.1: cvl_0_1 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:12.942 13:09:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:12.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:12.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms
00:13:12.942
00:13:12.942 --- 10.0.0.2 ping statistics ---
00:13:12.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:12.942 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:12.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:12.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms
00:13:12.942
00:13:12.942 --- 10.0.0.1 ping statistics ---
00:13:12.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:12.942 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1648918
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1648918
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1648918 ']'
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable
00:13:12.942 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:12.942 [2024-11-06 13:09:54.388929] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
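nvmf_tcp_init, traced above, emulates a two-host NVMe/TCP setup on one machine: the first E810 port (cvl_0_0) is moved into a private network namespace to serve as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, and one ping in each direction proves connectivity before nvmf_tgt is launched inside the namespace. The same plumbing reduced to its core commands, as a sketch (interface names and addresses taken from the log; run as root):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                            # root namespace -> target namespace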
00:13:12.942 [2024-11-06 13:09:54.389001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:12.942 [2024-11-06 13:09:54.488923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:12.942 [2024-11-06 13:09:54.543336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:12.942 [2024-11-06 13:09:54.543389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:12.942 [2024-11-06 13:09:54.543398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:12.942 [2024-11-06 13:09:54.543406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:12.942 [2024-11-06 13:09:54.543412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:12.942 [2024-11-06 13:09:54.545483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:12.942 [2024-11-06 13:09:54.545639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:12.942 [2024-11-06 13:09:54.545808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:12.942 [2024-11-06 13:09:54.545809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.514 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:13.514 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0
00:13:13.514 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:13.515 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:13.515 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:13.515 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:13.515 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:13.515 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20155
[2024-11-06 13:09:55.435010] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:13.775 {
00:13:13.775 "nqn": "nqn.2016-06.io.spdk:cnode20155",
00:13:13.775 "tgt_name": "foobar",
00:13:13.775 "method": "nvmf_create_subsystem",
00:13:13.775 "req_id": 1
00:13:13.775 }
00:13:13.775 Got JSON-RPC error response
00:13:13.775 response:
00:13:13.775 {
00:13:13.775 "code": -32603,
00:13:13.775 "message": "Unable to find target foobar"
00:13:13.775 }'
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:13.775 {
00:13:13.775 "nqn": "nqn.2016-06.io.spdk:cnode20155",
00:13:13.775 "tgt_name": "foobar",
00:13:13.775 "method": "nvmf_create_subsystem",
00:13:13.775 "req_id": 1
00:13:13.775 }
00:13:13.775 Got JSON-RPC error response
response:
00:13:13.775 {
00:13:13.775 "code": -32603,
00:13:13.775 "message": "Unable to find target foobar"
00:13:13.775 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
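Each negative check in invalid.sh follows the pattern just traced: issue an RPC with one deliberately bad argument, capture the JSON-RPC error response that rpc.py prints, and glob-match the expected message. A sketch of that pattern under the same workspace layout (the capture via `2>&1 || true` is an assumption; the script's exact plumbing is not visible in this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # "foobar" names a target that was never created, so the call must fail.
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20155 2>&1 || true)
    [[ $out == *"Unable to find target"* ]] && echo "negative test passed"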
00:13:13.775 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:13.775 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5521
[2024-11-06 13:09:55.643819] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5521: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:14.037 {
00:13:14.037 "nqn": "nqn.2016-06.io.spdk:cnode5521",
00:13:14.037 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:14.037 "method": "nvmf_create_subsystem",
00:13:14.037 "req_id": 1
00:13:14.037 }
00:13:14.037 Got JSON-RPC error response
00:13:14.037 response:
00:13:14.037 {
00:13:14.037 "code": -32602,
00:13:14.037 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:14.037 }'
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:14.037 {
00:13:14.037 "nqn": "nqn.2016-06.io.spdk:cnode5521",
00:13:14.037 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:14.037 "method": "nvmf_create_subsystem",
00:13:14.037 "req_id": 1
00:13:14.037 }
00:13:14.037 Got JSON-RPC error response
00:13:14.037 response:
00:13:14.037 {
00:13:14.037 "code": -32602,
00:13:14.037 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:14.037 } == *\I\n\v\a\l\i\d\ \S\N* ]]
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20017
[2024-11-06 13:09:55.852537] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20017: invalid model number 'SPDK_Controller'
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:14.037 {
00:13:14.037 "nqn": "nqn.2016-06.io.spdk:cnode20017",
00:13:14.037 "model_number": "SPDK_Controller\u001f",
00:13:14.037 "method": "nvmf_create_subsystem",
00:13:14.037 "req_id": 1
00:13:14.037 }
00:13:14.037 Got JSON-RPC error response
00:13:14.037 response:
00:13:14.037 {
00:13:14.037 "code": -32602,
00:13:14.037 "message": "Invalid MN SPDK_Controller\u001f"
00:13:14.037 }'
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:14.037 {
00:13:14.037 "nqn": "nqn.2016-06.io.spdk:cnode20017",
00:13:14.037 "model_number": "SPDK_Controller\u001f",
00:13:14.037 "method": "nvmf_create_subsystem",
00:13:14.037 "req_id": 1
00:13:14.037 }
00:13:14.037 Got JSON-RPC error response
00:13:14.037 response:
00:13:14.037 {
00:13:14.037 "code": -32602,
00:13:14.037 "message": "Invalid MN SPDK_Controller\u001f"
00:13:14.037 } == *\I\n\v\a\l\i\d\ \M\N* ]]
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:14.037 13:09:55
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:14.037 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.037 13:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:14.300 
13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:14.300 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:14.301 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:14.301 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:14.301 13:09:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 
00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 3 == \- ]] 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '3d*]Fy[v,P@)9C{COdWH~' 00:13:14.301 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '3d*]Fy[v,P@)9C{COdWH~' nqn.2016-06.io.spdk:cnode30527 00:13:14.562 [2024-11-06 13:09:56.229964] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30527: invalid serial number '3d*]Fy[v,P@)9C{COdWH~' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:14.562 { 00:13:14.562 "nqn": "nqn.2016-06.io.spdk:cnode30527", 00:13:14.562 "serial_number": "3d*]Fy[v,P@)9C{COdWH~", 00:13:14.562 "method": "nvmf_create_subsystem", 00:13:14.562 "req_id": 1 00:13:14.562 } 00:13:14.562 Got JSON-RPC error response 00:13:14.562 response: 00:13:14.562 { 00:13:14.562 "code": -32602, 00:13:14.562 "message": "Invalid SN 3d*]Fy[v,P@)9C{COdWH~" 00:13:14.562 }' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:14.562 { 00:13:14.562 "nqn": "nqn.2016-06.io.spdk:cnode30527", 00:13:14.562 "serial_number": "3d*]Fy[v,P@)9C{COdWH~", 00:13:14.562 "method": "nvmf_create_subsystem", 00:13:14.562 "req_id": 1 00:13:14.562 } 00:13:14.562 Got JSON-RPC error response 00:13:14.562 response: 00:13:14.562 { 00:13:14.562 "code": -32602, 00:13:14.562 "message": "Invalid SN 3d*]Fy[v,P@)9C{COdWH~" 00:13:14.562 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:14.562 
13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 
00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:14.562 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.563 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=D 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x29' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '7pZMlq\P*sp]5]^ld`'\'':B-nH}\F`D:(%DqK\DAS)#' 00:13:14.824 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '7pZMlq\P*sp]5]^ld`'\'':B-nH}\F`D:(%DqK\DAS)#' nqn.2016-06.io.spdk:cnode1020 00:13:15.085 [2024-11-06 13:09:56.768081] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1020: invalid model number '7pZMlq\P*sp]5]^ld`':B-nH}\F`D:(%DqK\DAS)#' 00:13:15.085 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:15.085 { 00:13:15.085 "nqn": "nqn.2016-06.io.spdk:cnode1020", 00:13:15.085 "model_number": "7pZMlq\\P*sp]5]^ld`'\'':B-nH}\\F`D:(%DqK\\DAS)#", 00:13:15.085 "method": "nvmf_create_subsystem", 00:13:15.085 "req_id": 1 00:13:15.085 } 00:13:15.085 Got JSON-RPC error response 00:13:15.085 response: 00:13:15.085 { 00:13:15.085 "code": -32602, 00:13:15.085 "message": "Invalid MN 7pZMlq\\P*sp]5]^ld`'\'':B-nH}\\F`D:(%DqK\\DAS)#" 00:13:15.085 }' 00:13:15.085 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:15.085 { 00:13:15.085 "nqn": "nqn.2016-06.io.spdk:cnode1020", 00:13:15.085 "model_number": "7pZMlq\\P*sp]5]^ld`':B-nH}\\F`D:(%DqK\\DAS)#", 00:13:15.085 "method": "nvmf_create_subsystem", 00:13:15.085 "req_id": 1 00:13:15.085 } 00:13:15.085 Got JSON-RPC error response 00:13:15.085 response: 00:13:15.085 { 00:13:15.085 "code": -32602, 00:13:15.085 "message": "Invalid MN 7pZMlq\\P*sp]5]^ld`':B-nH}\\F`D:(%DqK\\DAS)#" 00:13:15.085 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:15.085 13:09:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:15.085 [2024-11-06 13:09:56.968927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.345 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:15.345 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:15.345 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:15.345 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:15.345 13:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:13:15.345 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
[2024-11-06 13:09:57.386492] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:13:15.606 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:13:15.606 {
00:13:15.606 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:15.606 "listen_address": {
00:13:15.606 "trtype": "tcp",
00:13:15.606 "traddr": "",
00:13:15.606 "trsvcid": "4421"
00:13:15.606 },
00:13:15.606 "method": "nvmf_subsystem_remove_listener",
00:13:15.606 "req_id": 1
00:13:15.606 }
00:13:15.606 Got JSON-RPC error response
00:13:15.606 response:
00:13:15.606 {
00:13:15.606 "code": -32602,
00:13:15.606 "message": "Invalid parameters"
00:13:15.606 }'
13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:13:15.606 {
00:13:15.606 "nqn": "nqn.2016-06.io.spdk:cnode",
00:13:15.606 "listen_address": {
00:13:15.606 "trtype": "tcp",
00:13:15.606 "traddr": "",
00:13:15.606 "trsvcid": "4421"
00:13:15.606 },
00:13:15.606 "method": "nvmf_subsystem_remove_listener",
00:13:15.606 "req_id": 1
00:13:15.606 }
00:13:15.606 Got JSON-RPC error response
00:13:15.606 response:
00:13:15.606 {
00:13:15.606 "code": -32602,
00:13:15.606 "message": "Invalid parameters"
00:13:15.606 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30587 -i 0
[2024-11-06 13:09:57.583103] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30587: invalid cntlid range [0-65519]
00:13:15.867 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:13:15.867 {
00:13:15.867 "nqn": "nqn.2016-06.io.spdk:cnode30587",
00:13:15.867 "min_cntlid": 0,
00:13:15.867 "method": "nvmf_create_subsystem",
00:13:15.867 "req_id": 1
00:13:15.867 }
00:13:15.867 Got JSON-RPC error response
00:13:15.867 response:
00:13:15.867 {
00:13:15.867 "code": -32602,
00:13:15.867 "message": "Invalid cntlid range [0-65519]"
00:13:15.867 }'
13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:13:15.867 {
00:13:15.867 "nqn": "nqn.2016-06.io.spdk:cnode30587",
00:13:15.867 "min_cntlid": 0,
00:13:15.867 "method": "nvmf_create_subsystem",
00:13:15.867 "req_id": 1
00:13:15.867 }
00:13:15.867 Got JSON-RPC error response
00:13:15.867 response:
00:13:15.867 {
00:13:15.867 "code": -32602,
00:13:15.867 "message": "Invalid cntlid range [0-65519]"
00:13:15.867 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
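The min_cntlid=0 rejection above and the four variants that follow probe the controller ID bounds: SPDK accepts cntlid values from 1 through 65519 (a range inferred here from the error text itself, which quotes the clamped bounds), so a floor of 0, a floor of 65520, a ceiling of 0, a ceiling of 65520, and a floor above the ceiling are all refused with "Invalid cntlid range [min-max]". Reproduced standalone, with rpc pointing at scripts/rpc.py as in the sketch further above:

    # min_cntlid below the valid floor of 1:
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30587 -i 0       # -> Invalid cntlid range [0-65519]
    # min_cntlid greater than max_cntlid:
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11459 -i 6 -I 5  # -> Invalid cntlid range [6-5]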
"nqn.2016-06.io.spdk:cnode21787", 00:13:16.128 "min_cntlid": 65520, 00:13:16.128 "method": "nvmf_create_subsystem", 00:13:16.128 "req_id": 1 00:13:16.128 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [65520-65519]" 00:13:16.128 }' 00:13:16.128 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:16.128 { 00:13:16.128 "nqn": "nqn.2016-06.io.spdk:cnode21787", 00:13:16.128 "min_cntlid": 65520, 00:13:16.128 "method": "nvmf_create_subsystem", 00:13:16.128 "req_id": 1 00:13:16.128 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [65520-65519]" 00:13:16.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.128 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8750 -I 0 00:13:16.128 [2024-11-06 13:09:57.944216] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8750: invalid cntlid range [1-0] 00:13:16.128 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:16.128 { 00:13:16.128 "nqn": "nqn.2016-06.io.spdk:cnode8750", 00:13:16.128 "max_cntlid": 0, 00:13:16.128 "method": "nvmf_create_subsystem", 00:13:16.128 "req_id": 1 00:13:16.128 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [1-0]" 00:13:16.128 }' 00:13:16.128 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:16.128 { 00:13:16.128 "nqn": "nqn.2016-06.io.spdk:cnode8750", 00:13:16.128 "max_cntlid": 0, 00:13:16.128 "method": "nvmf_create_subsystem", 00:13:16.128 "req_id": 1 00:13:16.128 } 00:13:16.128 Got JSON-RPC error response 00:13:16.128 response: 00:13:16.128 { 00:13:16.128 "code": -32602, 00:13:16.128 "message": "Invalid cntlid range [1-0]" 00:13:16.128 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.128 13:09:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23378 -I 65520 00:13:16.388 [2024-11-06 13:09:58.128794] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23378: invalid cntlid range [1-65520] 00:13:16.388 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:16.388 { 00:13:16.388 "nqn": "nqn.2016-06.io.spdk:cnode23378", 00:13:16.388 "max_cntlid": 65520, 00:13:16.388 "method": "nvmf_create_subsystem", 00:13:16.388 "req_id": 1 00:13:16.388 } 00:13:16.388 Got JSON-RPC error response 00:13:16.388 response: 00:13:16.388 { 00:13:16.388 "code": -32602, 00:13:16.388 "message": "Invalid cntlid range [1-65520]" 00:13:16.388 }' 00:13:16.388 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:16.388 { 00:13:16.388 "nqn": "nqn.2016-06.io.spdk:cnode23378", 00:13:16.388 "max_cntlid": 65520, 00:13:16.388 "method": "nvmf_create_subsystem", 00:13:16.388 "req_id": 1 00:13:16.388 } 00:13:16.388 Got JSON-RPC error response 00:13:16.388 response: 00:13:16.388 { 00:13:16.388 "code": -32602, 00:13:16.388 "message": "Invalid cntlid range [1-65520]" 
00:13:16.388 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.388 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11459 -i 6 -I 5 00:13:16.648 [2024-11-06 13:09:58.313363] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11459: invalid cntlid range [6-5] 00:13:16.648 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:16.648 { 00:13:16.649 "nqn": "nqn.2016-06.io.spdk:cnode11459", 00:13:16.649 "min_cntlid": 6, 00:13:16.649 "max_cntlid": 5, 00:13:16.649 "method": "nvmf_create_subsystem", 00:13:16.649 "req_id": 1 00:13:16.649 } 00:13:16.649 Got JSON-RPC error response 00:13:16.649 response: 00:13:16.649 { 00:13:16.649 "code": -32602, 00:13:16.649 "message": "Invalid cntlid range [6-5]" 00:13:16.649 }' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:16.649 { 00:13:16.649 "nqn": "nqn.2016-06.io.spdk:cnode11459", 00:13:16.649 "min_cntlid": 6, 00:13:16.649 "max_cntlid": 5, 00:13:16.649 "method": "nvmf_create_subsystem", 00:13:16.649 "req_id": 1 00:13:16.649 } 00:13:16.649 Got JSON-RPC error response 00:13:16.649 response: 00:13:16.649 { 00:13:16.649 "code": -32602, 00:13:16.649 "message": "Invalid cntlid range [6-5]" 00:13:16.649 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:16.649 { 00:13:16.649 "name": "foobar", 00:13:16.649 "method": "nvmf_delete_target", 00:13:16.649 "req_id": 1 00:13:16.649 } 00:13:16.649 Got JSON-RPC error response 00:13:16.649 response: 00:13:16.649 { 00:13:16.649 "code": -32602, 00:13:16.649 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:16.649 }' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:16.649 { 00:13:16.649 "name": "foobar", 00:13:16.649 "method": "nvmf_delete_target", 00:13:16.649 "req_id": 1 00:13:16.649 } 00:13:16.649 Got JSON-RPC error response 00:13:16.649 response: 00:13:16.649 { 00:13:16.649 "code": -32602, 00:13:16.649 "message": "The specified target doesn't exist, cannot delete it." 
00:13:16.649 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.649 rmmod nvme_tcp 00:13:16.649 rmmod nvme_fabrics 00:13:16.649 rmmod nvme_keyring 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1648918 ']' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1648918 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 1648918 ']' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 1648918 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.649 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1648918 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1648918' 00:13:16.909 killing process with pid 1648918 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 1648918 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 1648918 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.909 13:09:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.453 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.453 00:13:19.453 real 0m14.319s 00:13:19.453 user 0m21.230s 00:13:19.453 sys 0m6.856s 00:13:19.453 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.453 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.453 ************************************ 00:13:19.453 END TEST nvmf_invalid 00:13:19.453 ************************************ 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 ************************************ 00:13:19.454 START TEST nvmf_connect_stress 00:13:19.454 ************************************ 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:19.454 * Looking for test storage... 
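Before the connect_stress run gets going, note that the nvmf_invalid checks that just finished all follow one shape: issue an RPC that must fail, capture the JSON-RPC error text, and glob-match the message. A minimal sketch of that pattern, assuming a target is already listening on the default RPC socket and using the rpc.py path from this workspace (hedged, not the verbatim invalid.sh source):

    # Sketch of the invalid-parameter check pattern traced above.
    # min_cntlid of 0 is out of range, so the call is expected to
    # fail with JSON-RPC error -32602.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$("$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30587 -i 0 2>&1) || true
    # The check passes only when the error names the rejected range.
    [[ $out == *"Invalid cntlid range"* ]] && echo "got expected -32602 error"

The same pattern covers the other cases in the trace (min_cntlid 65520, max_cntlid 0 and 65520, min 6 > max 5, and the nonexistent-target delete), each matching its own expected message.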
00:13:19.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.454 13:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.454 --rc genhtml_branch_coverage=1 00:13:19.454 --rc genhtml_function_coverage=1 00:13:19.454 --rc genhtml_legend=1 00:13:19.454 --rc geninfo_all_blocks=1 00:13:19.454 --rc geninfo_unexecuted_blocks=1 00:13:19.454 00:13:19.454 ' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.454 --rc genhtml_branch_coverage=1 00:13:19.454 --rc genhtml_function_coverage=1 00:13:19.454 --rc genhtml_legend=1 00:13:19.454 --rc geninfo_all_blocks=1 00:13:19.454 --rc geninfo_unexecuted_blocks=1 00:13:19.454 00:13:19.454 ' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.454 --rc genhtml_branch_coverage=1 00:13:19.454 --rc genhtml_function_coverage=1 00:13:19.454 --rc genhtml_legend=1 00:13:19.454 --rc geninfo_all_blocks=1 00:13:19.454 --rc geninfo_unexecuted_blocks=1 00:13:19.454 00:13:19.454 ' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.454 --rc genhtml_branch_coverage=1 00:13:19.454 --rc genhtml_function_coverage=1 00:13:19.454 --rc genhtml_legend=1 00:13:19.454 --rc geninfo_all_blocks=1 00:13:19.454 --rc geninfo_unexecuted_blocks=1 00:13:19.454 00:13:19.454 ' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.454 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:19.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.455 13:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.726 13:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.726 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:27.727 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:27.727 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:27.727 Found net devices under 0000:31:00.0: cvl_0_0 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:27.727 Found net devices under 0000:31:00.1: cvl_0_1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:13:27.727 00:13:27.727 --- 10.0.0.2 ping statistics --- 00:13:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.727 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:13:27.727 00:13:27.727 --- 10.0.0.1 ping statistics --- 00:13:27.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.727 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1654241 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1654241 00:13:27.727 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1654241 ']' 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.728 13:10:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.728 [2024-11-06 13:10:08.778507] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:13:27.728 [2024-11-06 13:10:08.778572] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.728 [2024-11-06 13:10:08.884276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.728 [2024-11-06 13:10:08.936322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.728 [2024-11-06 13:10:08.936381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.728 [2024-11-06 13:10:08.936390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.728 [2024-11-06 13:10:08.936397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.728 [2024-11-06 13:10:08.936404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.728 [2024-11-06 13:10:08.938575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.728 [2024-11-06 13:10:08.938731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.728 [2024-11-06 13:10:08.938732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.728 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.728 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:27.728 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.728 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.728 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 [2024-11-06 13:10:09.654448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
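At this point the harness has the target process up and is assembling the test fixture over JSON-RPC: the tcp transport was created at connect_stress.sh@15 and the cnode1 subsystem at @16, with the listener and the NULL1 bdev following just below. Condensed, the bring-up is this four-call sequence (a sketch using rpc_cmd as the harness does; outside the harness each call would go through scripts/rpc.py):

    # Target bring-up as traced at connect_stress.sh@15-18 (condensed sketch).
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -a -s SPDK00000000000001 -m 10    # allow any host, serial, max 10 namespaces
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420        # listen on the netns-side target address
    rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks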
00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 [2024-11-06 13:10:09.680165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 NULL1 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1654430 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:27.989 13:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 13:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.250 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.250 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:28.250 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.250 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.250 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.821 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.821 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:28.821 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.821 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.821 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.083 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.083 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:29.083 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.083 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.083 13:10:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.344 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:29.344 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.344 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.344 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.604 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.604 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:29.604 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.604 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.604 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.865 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.865 13:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:29.865 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.865 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.865 13:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[the same probe cycle -- kill -0 1654430 (connect_stress.sh@34), rpc_cmd (@35), xtrace_disable, set +x, then the [[ 0 == 0 ]] status check at common/autotest_common.sh@589 -- repeats verbatim with only the timestamps advancing, from 00:13:29.865 (13:10:11) through 00:13:37.970 (13:10:19), for as long as stress process 1654430 stays alive; a bash sketch of the wait pattern follows, after which the trace resumes with the final probe]
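The cycle condensed above is the visible half of connect_stress.sh's wait loop. A minimal bash sketch of the pattern, consistent with the script line numbers printed in the trace (34, 35, 38, 39) but using hypothetical stand-ins for names the log does not show (PERF_PID for the backgrounded stress process, here PID 1654430, and rpc.txt for the prepared request batch):

while kill -0 "$PERF_PID" 2>/dev/null; do  # line 34: signal 0 probes existence without killing
    rpc_cmd < rpc.txt                      # line 35: keep exercising the RPC server meanwhile
done                                       # the probe eventually fails: "No such process"
wait "$PERF_PID"                           # line 38: reap the stress process, collect its status
rm -f rpc.txt                              # line 39: drop the request file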
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:37.970 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.970 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.970 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.970 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1654430 00:13:38.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1654430) - No such process 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1654430 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.231 rmmod nvme_tcp 00:13:38.231 rmmod nvme_fabrics 00:13:38.231 rmmod nvme_keyring 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1654241 ']' 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1654241 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1654241 ']' 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1654241 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:38.231 13:10:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1654241 00:13:38.231 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
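The cleanup traced above (nvmftestfini: sync, then modprobe -v -r of the nvme-tcp and nvme-fabrics modules) hands off to killprocess, which is now probing the target daemon, PID 1654241; its remaining steps continue just below. A hedged bash reconstruction of killprocess's shape from the trace alone -- the @-numbers match the traced lines of common/autotest_common.sh in this build, the rest is inferred, not the script verbatim:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                           # @952: a PID argument is required
    kill -0 "$pid" || return 0                          # @956: nothing to do if already gone
    local process_name
    if [ "$(uname)" = Linux ]; then                     # @957
        process_name=$(ps --no-headers -o comm= "$pid") # @958: here it reports reactor_1
    fi
    # @962: a sudo wrapper would need its child killed instead (handling elided in this sketch)
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"            # @970
        kill "$pid"                                     # @971: default SIGTERM
    fi
    wait "$pid" 2>/dev/null || true                     # @976: reap it so the exit code is collected
}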
00:13:38.231 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:38.231 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1654241' 00:13:38.231 killing process with pid 1654241 00:13:38.231 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1654241 00:13:38.231 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1654241 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.492 13:10:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:40.406 00:13:40.406 real 0m21.372s 00:13:40.406 user 0m42.267s 00:13:40.406 sys 0m9.358s 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.406 ************************************ 00:13:40.406 END TEST nvmf_connect_stress 00:13:40.406 ************************************ 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.406 ************************************ 00:13:40.406 START TEST nvmf_fused_ordering 00:13:40.406 ************************************ 00:13:40.406 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:40.668 * Looking for test storage... 
00:13:40.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.668 --rc genhtml_branch_coverage=1 00:13:40.668 --rc genhtml_function_coverage=1 00:13:40.668 --rc genhtml_legend=1 00:13:40.668 --rc geninfo_all_blocks=1 00:13:40.668 --rc geninfo_unexecuted_blocks=1 00:13:40.668 00:13:40.668 ' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.668 --rc genhtml_branch_coverage=1 00:13:40.668 --rc genhtml_function_coverage=1 00:13:40.668 --rc genhtml_legend=1 00:13:40.668 --rc geninfo_all_blocks=1 00:13:40.668 --rc geninfo_unexecuted_blocks=1 00:13:40.668 00:13:40.668 ' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.668 --rc genhtml_branch_coverage=1 00:13:40.668 --rc genhtml_function_coverage=1 00:13:40.668 --rc genhtml_legend=1 00:13:40.668 --rc geninfo_all_blocks=1 00:13:40.668 --rc geninfo_unexecuted_blocks=1 00:13:40.668 00:13:40.668 ' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:40.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.668 --rc genhtml_branch_coverage=1 00:13:40.668 --rc genhtml_function_coverage=1 00:13:40.668 --rc genhtml_legend=1 00:13:40.668 --rc geninfo_all_blocks=1 00:13:40.668 --rc geninfo_unexecuted_blocks=1 00:13:40.668 00:13:40.668 ' 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
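The cmp_versions walk traced above compares dotted version strings field by field; here the first component already decides 1.15 < 2, so the pre-2.0 lcov option set is selected. A compact standalone rendering of the same idea (numeric fields only; the real script routes each field through its decimal helper first):

cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"      # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # "2"    -> (2)
    local v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}             # missing fields compare as 0
        ((a > b)) && { [[ $2 == '>'* ]]; return; }  # decided on this field
        ((a < b)) && { [[ $2 == '<'* ]]; return; }
    done
    [[ $2 == *'=' ]]                                # all fields equal: only <=, >= (or =) hold
}
cmp_versions 1.15 '<' 2 && echo "pre-2.0 lcov: keep the legacy --rc lcov_*_coverage=1 options"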
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.668 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories, already prepended several times over by earlier re-sourcing of paths/export.sh]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[as above] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[as above] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the full exported PATH, elided] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:40.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:40.669 13:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.814 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.814 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.814 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.814 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.814 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.815 13:10:29 
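The array declarations just made (and the mlx lookups that continue immediately below) are scaffolding for gather_supported_nvmf_pci_devs: each NIC-family array is filled from a prebuilt cache keyed by vendor:device, and every matching PCI function is then mapped to its kernel net device through sysfs. A condensed sketch of the mechanism; pci_bus_cache is built elsewhere in the real script and is seeded here purely for illustration:

declare -A pci_bus_cache                     # "vendor:device" -> space-separated PCI addresses
pci_bus_cache["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"   # hypothetical seed matching this rig
intel=0x8086 mellanox=0x15b3
declare -a e810 pci_devs net_devs
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 100G parts
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 25G parts -- the two functions found here
pci_devs=("${e810[@]}")                      # the e810 family is the one selected for this job
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs: PCI function -> net interface
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0, cvl_0_1
    net_devs+=("${pci_net_devs[@]}")
done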
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:48.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:48.815 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:48.815 Found net devices under 0000:31:00.0: cvl_0_0 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:48.815 Found net devices under 0000:31:00.1: cvl_0_1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.815 13:10:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.815 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.815 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:13:48.816 00:13:48.816 --- 10.0.0.2 ping statistics --- 00:13:48.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.816 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:13:48.816 00:13:48.816 --- 10.0.0.1 ping statistics --- 00:13:48.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.816 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1660821 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1660821 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1660821 ']' 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:48.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:48.816 13:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 [2024-11-06 13:10:30.215537] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:13:48.816 [2024-11-06 13:10:30.215607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.816 [2024-11-06 13:10:30.315382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.816 [2024-11-06 13:10:30.365892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.816 [2024-11-06 13:10:30.365938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.816 [2024-11-06 13:10:30.365946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.816 [2024-11-06 13:10:30.365954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.816 [2024-11-06 13:10:30.365960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.816 [2024-11-06 13:10:30.366795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 [2024-11-06 13:10:31.080634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 [2024-11-06 13:10:31.104884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 NULL1 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.387 13:10:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:49.387 [2024-11-06 13:10:31.176366] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
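Taken together, the rpc_cmd calls traced above provision the complete target before the client starts. Spelled out as direct rpc.py invocations against the /var/tmp/spdk.sock socket the target is listening on -- an equivalent rendering of what the rpc_cmd wrapper does, not the harness's literal commands:

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192        # flags as traced; -u is the I/O unit size
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, serial number, max 10 namespaces
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create NULL1 1000 512                # 1000 MB null bdev, 512 B blocks ("size: 1GB" once attached)
scripts/rpc.py -s /var/tmp/spdk.sock bdev_wait_for_examine
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# then the client below connects over that listener and walks its fused-command counter:
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'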
00:13:49.387 [2024-11-06 13:10:31.176441] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1660860 ] 00:13:49.960 Attached to nqn.2016-06.io.spdk:cnode1 00:13:49.960 Namespace ID: 1 size: 1GB 00:13:49.960 fused_ordering(0) 00:13:49.960 fused_ordering(1)
[fused_ordering(2) through fused_ordering(419) follow in strictly ascending order, one entry each, between 00:13:49.960 and 00:13:50.796, with no index skipped or repeated; only the counter value changes from line to line]
00:13:50.796 fused_ordering(420) 00:13:50.796
fused_ordering(421) 00:13:50.796 fused_ordering(422) 00:13:50.796 fused_ordering(423) 00:13:50.796 fused_ordering(424) 00:13:50.796 fused_ordering(425) 00:13:50.796 fused_ordering(426) 00:13:50.796 fused_ordering(427) 00:13:50.796 fused_ordering(428) 00:13:50.796 fused_ordering(429) 00:13:50.796 fused_ordering(430) 00:13:50.796 fused_ordering(431) 00:13:50.796 fused_ordering(432) 00:13:50.796 fused_ordering(433) 00:13:50.796 fused_ordering(434) 00:13:50.796 fused_ordering(435) 00:13:50.796 fused_ordering(436) 00:13:50.796 fused_ordering(437) 00:13:50.796 fused_ordering(438) 00:13:50.796 fused_ordering(439) 00:13:50.796 fused_ordering(440) 00:13:50.796 fused_ordering(441) 00:13:50.796 fused_ordering(442) 00:13:50.796 fused_ordering(443) 00:13:50.796 fused_ordering(444) 00:13:50.796 fused_ordering(445) 00:13:50.796 fused_ordering(446) 00:13:50.796 fused_ordering(447) 00:13:50.796 fused_ordering(448) 00:13:50.796 fused_ordering(449) 00:13:50.796 fused_ordering(450) 00:13:50.796 fused_ordering(451) 00:13:50.796 fused_ordering(452) 00:13:50.796 fused_ordering(453) 00:13:50.796 fused_ordering(454) 00:13:50.796 fused_ordering(455) 00:13:50.796 fused_ordering(456) 00:13:50.796 fused_ordering(457) 00:13:50.796 fused_ordering(458) 00:13:50.796 fused_ordering(459) 00:13:50.796 fused_ordering(460) 00:13:50.796 fused_ordering(461) 00:13:50.796 fused_ordering(462) 00:13:50.796 fused_ordering(463) 00:13:50.796 fused_ordering(464) 00:13:50.796 fused_ordering(465) 00:13:50.796 fused_ordering(466) 00:13:50.796 fused_ordering(467) 00:13:50.796 fused_ordering(468) 00:13:50.796 fused_ordering(469) 00:13:50.796 fused_ordering(470) 00:13:50.796 fused_ordering(471) 00:13:50.796 fused_ordering(472) 00:13:50.796 fused_ordering(473) 00:13:50.796 fused_ordering(474) 00:13:50.796 fused_ordering(475) 00:13:50.796 fused_ordering(476) 00:13:50.796 fused_ordering(477) 00:13:50.796 fused_ordering(478) 00:13:50.796 fused_ordering(479) 00:13:50.796 fused_ordering(480) 00:13:50.796 fused_ordering(481) 00:13:50.796 fused_ordering(482) 00:13:50.796 fused_ordering(483) 00:13:50.796 fused_ordering(484) 00:13:50.796 fused_ordering(485) 00:13:50.796 fused_ordering(486) 00:13:50.796 fused_ordering(487) 00:13:50.796 fused_ordering(488) 00:13:50.796 fused_ordering(489) 00:13:50.796 fused_ordering(490) 00:13:50.796 fused_ordering(491) 00:13:50.796 fused_ordering(492) 00:13:50.796 fused_ordering(493) 00:13:50.796 fused_ordering(494) 00:13:50.797 fused_ordering(495) 00:13:50.797 fused_ordering(496) 00:13:50.797 fused_ordering(497) 00:13:50.797 fused_ordering(498) 00:13:50.797 fused_ordering(499) 00:13:50.797 fused_ordering(500) 00:13:50.797 fused_ordering(501) 00:13:50.797 fused_ordering(502) 00:13:50.797 fused_ordering(503) 00:13:50.797 fused_ordering(504) 00:13:50.797 fused_ordering(505) 00:13:50.797 fused_ordering(506) 00:13:50.797 fused_ordering(507) 00:13:50.797 fused_ordering(508) 00:13:50.797 fused_ordering(509) 00:13:50.797 fused_ordering(510) 00:13:50.797 fused_ordering(511) 00:13:50.797 fused_ordering(512) 00:13:50.797 fused_ordering(513) 00:13:50.797 fused_ordering(514) 00:13:50.797 fused_ordering(515) 00:13:50.797 fused_ordering(516) 00:13:50.797 fused_ordering(517) 00:13:50.797 fused_ordering(518) 00:13:50.797 fused_ordering(519) 00:13:50.797 fused_ordering(520) 00:13:50.797 fused_ordering(521) 00:13:50.797 fused_ordering(522) 00:13:50.797 fused_ordering(523) 00:13:50.797 fused_ordering(524) 00:13:50.797 fused_ordering(525) 00:13:50.797 fused_ordering(526) 00:13:50.797 fused_ordering(527) 00:13:50.797 fused_ordering(528) 
00:13:50.797 fused_ordering(529) 00:13:50.797 fused_ordering(530) 00:13:50.797 fused_ordering(531) 00:13:50.797 fused_ordering(532) 00:13:50.797 fused_ordering(533) 00:13:50.797 fused_ordering(534) 00:13:50.797 fused_ordering(535) 00:13:50.797 fused_ordering(536) 00:13:50.797 fused_ordering(537) 00:13:50.797 fused_ordering(538) 00:13:50.797 fused_ordering(539) 00:13:50.797 fused_ordering(540) 00:13:50.797 fused_ordering(541) 00:13:50.797 fused_ordering(542) 00:13:50.797 fused_ordering(543) 00:13:50.797 fused_ordering(544) 00:13:50.797 fused_ordering(545) 00:13:50.797 fused_ordering(546) 00:13:50.797 fused_ordering(547) 00:13:50.797 fused_ordering(548) 00:13:50.797 fused_ordering(549) 00:13:50.797 fused_ordering(550) 00:13:50.797 fused_ordering(551) 00:13:50.797 fused_ordering(552) 00:13:50.797 fused_ordering(553) 00:13:50.797 fused_ordering(554) 00:13:50.797 fused_ordering(555) 00:13:50.797 fused_ordering(556) 00:13:50.797 fused_ordering(557) 00:13:50.797 fused_ordering(558) 00:13:50.797 fused_ordering(559) 00:13:50.797 fused_ordering(560) 00:13:50.797 fused_ordering(561) 00:13:50.797 fused_ordering(562) 00:13:50.797 fused_ordering(563) 00:13:50.797 fused_ordering(564) 00:13:50.797 fused_ordering(565) 00:13:50.797 fused_ordering(566) 00:13:50.797 fused_ordering(567) 00:13:50.797 fused_ordering(568) 00:13:50.797 fused_ordering(569) 00:13:50.797 fused_ordering(570) 00:13:50.797 fused_ordering(571) 00:13:50.797 fused_ordering(572) 00:13:50.797 fused_ordering(573) 00:13:50.797 fused_ordering(574) 00:13:50.797 fused_ordering(575) 00:13:50.797 fused_ordering(576) 00:13:50.797 fused_ordering(577) 00:13:50.797 fused_ordering(578) 00:13:50.797 fused_ordering(579) 00:13:50.797 fused_ordering(580) 00:13:50.797 fused_ordering(581) 00:13:50.797 fused_ordering(582) 00:13:50.797 fused_ordering(583) 00:13:50.797 fused_ordering(584) 00:13:50.797 fused_ordering(585) 00:13:50.797 fused_ordering(586) 00:13:50.797 fused_ordering(587) 00:13:50.797 fused_ordering(588) 00:13:50.797 fused_ordering(589) 00:13:50.797 fused_ordering(590) 00:13:50.797 fused_ordering(591) 00:13:50.797 fused_ordering(592) 00:13:50.797 fused_ordering(593) 00:13:50.797 fused_ordering(594) 00:13:50.797 fused_ordering(595) 00:13:50.797 fused_ordering(596) 00:13:50.797 fused_ordering(597) 00:13:50.797 fused_ordering(598) 00:13:50.797 fused_ordering(599) 00:13:50.797 fused_ordering(600) 00:13:50.797 fused_ordering(601) 00:13:50.797 fused_ordering(602) 00:13:50.797 fused_ordering(603) 00:13:50.797 fused_ordering(604) 00:13:50.797 fused_ordering(605) 00:13:50.797 fused_ordering(606) 00:13:50.797 fused_ordering(607) 00:13:50.797 fused_ordering(608) 00:13:50.797 fused_ordering(609) 00:13:50.797 fused_ordering(610) 00:13:50.797 fused_ordering(611) 00:13:50.797 fused_ordering(612) 00:13:50.797 fused_ordering(613) 00:13:50.797 fused_ordering(614) 00:13:50.797 fused_ordering(615) 00:13:51.371 fused_ordering(616) 00:13:51.371 fused_ordering(617) 00:13:51.371 fused_ordering(618) 00:13:51.371 fused_ordering(619) 00:13:51.371 fused_ordering(620) 00:13:51.371 fused_ordering(621) 00:13:51.371 fused_ordering(622) 00:13:51.371 fused_ordering(623) 00:13:51.371 fused_ordering(624) 00:13:51.371 fused_ordering(625) 00:13:51.371 fused_ordering(626) 00:13:51.371 fused_ordering(627) 00:13:51.371 fused_ordering(628) 00:13:51.371 fused_ordering(629) 00:13:51.371 fused_ordering(630) 00:13:51.371 fused_ordering(631) 00:13:51.371 fused_ordering(632) 00:13:51.371 fused_ordering(633) 00:13:51.371 fused_ordering(634) 00:13:51.371 fused_ordering(635) 00:13:51.371 
fused_ordering(636) 00:13:51.371 fused_ordering(637) 00:13:51.371 fused_ordering(638) 00:13:51.371 fused_ordering(639) 00:13:51.371 fused_ordering(640) 00:13:51.371 fused_ordering(641) 00:13:51.371 fused_ordering(642) 00:13:51.371 fused_ordering(643) 00:13:51.371 fused_ordering(644) 00:13:51.371 fused_ordering(645) 00:13:51.371 fused_ordering(646) 00:13:51.371 fused_ordering(647) 00:13:51.371 fused_ordering(648) 00:13:51.371 fused_ordering(649) 00:13:51.371 fused_ordering(650) 00:13:51.371 fused_ordering(651) 00:13:51.371 fused_ordering(652) 00:13:51.371 fused_ordering(653) 00:13:51.371 fused_ordering(654) 00:13:51.371 fused_ordering(655) 00:13:51.371 fused_ordering(656) 00:13:51.371 fused_ordering(657) 00:13:51.371 fused_ordering(658) 00:13:51.371 fused_ordering(659) 00:13:51.371 fused_ordering(660) 00:13:51.371 fused_ordering(661) 00:13:51.371 fused_ordering(662) 00:13:51.371 fused_ordering(663) 00:13:51.371 fused_ordering(664) 00:13:51.371 fused_ordering(665) 00:13:51.371 fused_ordering(666) 00:13:51.371 fused_ordering(667) 00:13:51.371 fused_ordering(668) 00:13:51.371 fused_ordering(669) 00:13:51.371 fused_ordering(670) 00:13:51.371 fused_ordering(671) 00:13:51.371 fused_ordering(672) 00:13:51.371 fused_ordering(673) 00:13:51.371 fused_ordering(674) 00:13:51.371 fused_ordering(675) 00:13:51.371 fused_ordering(676) 00:13:51.371 fused_ordering(677) 00:13:51.371 fused_ordering(678) 00:13:51.371 fused_ordering(679) 00:13:51.371 fused_ordering(680) 00:13:51.371 fused_ordering(681) 00:13:51.371 fused_ordering(682) 00:13:51.371 fused_ordering(683) 00:13:51.371 fused_ordering(684) 00:13:51.371 fused_ordering(685) 00:13:51.371 fused_ordering(686) 00:13:51.371 fused_ordering(687) 00:13:51.371 fused_ordering(688) 00:13:51.371 fused_ordering(689) 00:13:51.371 fused_ordering(690) 00:13:51.371 fused_ordering(691) 00:13:51.371 fused_ordering(692) 00:13:51.371 fused_ordering(693) 00:13:51.371 fused_ordering(694) 00:13:51.371 fused_ordering(695) 00:13:51.371 fused_ordering(696) 00:13:51.371 fused_ordering(697) 00:13:51.371 fused_ordering(698) 00:13:51.371 fused_ordering(699) 00:13:51.371 fused_ordering(700) 00:13:51.371 fused_ordering(701) 00:13:51.371 fused_ordering(702) 00:13:51.371 fused_ordering(703) 00:13:51.371 fused_ordering(704) 00:13:51.371 fused_ordering(705) 00:13:51.371 fused_ordering(706) 00:13:51.371 fused_ordering(707) 00:13:51.371 fused_ordering(708) 00:13:51.371 fused_ordering(709) 00:13:51.371 fused_ordering(710) 00:13:51.371 fused_ordering(711) 00:13:51.371 fused_ordering(712) 00:13:51.371 fused_ordering(713) 00:13:51.371 fused_ordering(714) 00:13:51.371 fused_ordering(715) 00:13:51.371 fused_ordering(716) 00:13:51.371 fused_ordering(717) 00:13:51.371 fused_ordering(718) 00:13:51.371 fused_ordering(719) 00:13:51.371 fused_ordering(720) 00:13:51.371 fused_ordering(721) 00:13:51.371 fused_ordering(722) 00:13:51.371 fused_ordering(723) 00:13:51.371 fused_ordering(724) 00:13:51.371 fused_ordering(725) 00:13:51.371 fused_ordering(726) 00:13:51.371 fused_ordering(727) 00:13:51.371 fused_ordering(728) 00:13:51.371 fused_ordering(729) 00:13:51.371 fused_ordering(730) 00:13:51.371 fused_ordering(731) 00:13:51.371 fused_ordering(732) 00:13:51.371 fused_ordering(733) 00:13:51.371 fused_ordering(734) 00:13:51.371 fused_ordering(735) 00:13:51.371 fused_ordering(736) 00:13:51.371 fused_ordering(737) 00:13:51.371 fused_ordering(738) 00:13:51.371 fused_ordering(739) 00:13:51.371 fused_ordering(740) 00:13:51.371 fused_ordering(741) 00:13:51.371 fused_ordering(742) 00:13:51.371 fused_ordering(743) 
00:13:51.371 fused_ordering(744) 00:13:51.371 fused_ordering(745) 00:13:51.371 fused_ordering(746) 00:13:51.371 fused_ordering(747) 00:13:51.371 fused_ordering(748) 00:13:51.372 fused_ordering(749) 00:13:51.372 fused_ordering(750) 00:13:51.372 fused_ordering(751) 00:13:51.372 fused_ordering(752) 00:13:51.372 fused_ordering(753) 00:13:51.372 fused_ordering(754) 00:13:51.372 fused_ordering(755) 00:13:51.372 fused_ordering(756) 00:13:51.372 fused_ordering(757) 00:13:51.372 fused_ordering(758) 00:13:51.372 fused_ordering(759) 00:13:51.372 fused_ordering(760) 00:13:51.372 fused_ordering(761) 00:13:51.372 fused_ordering(762) 00:13:51.372 fused_ordering(763) 00:13:51.372 fused_ordering(764) 00:13:51.372 fused_ordering(765) 00:13:51.372 fused_ordering(766) 00:13:51.372 fused_ordering(767) 00:13:51.372 fused_ordering(768) 00:13:51.372 fused_ordering(769) 00:13:51.372 fused_ordering(770) 00:13:51.372 fused_ordering(771) 00:13:51.372 fused_ordering(772) 00:13:51.372 fused_ordering(773) 00:13:51.372 fused_ordering(774) 00:13:51.372 fused_ordering(775) 00:13:51.372 fused_ordering(776) 00:13:51.372 fused_ordering(777) 00:13:51.372 fused_ordering(778) 00:13:51.372 fused_ordering(779) 00:13:51.372 fused_ordering(780) 00:13:51.372 fused_ordering(781) 00:13:51.372 fused_ordering(782) 00:13:51.372 fused_ordering(783) 00:13:51.372 fused_ordering(784) 00:13:51.372 fused_ordering(785) 00:13:51.372 fused_ordering(786) 00:13:51.372 fused_ordering(787) 00:13:51.372 fused_ordering(788) 00:13:51.372 fused_ordering(789) 00:13:51.372 fused_ordering(790) 00:13:51.372 fused_ordering(791) 00:13:51.372 fused_ordering(792) 00:13:51.372 fused_ordering(793) 00:13:51.372 fused_ordering(794) 00:13:51.372 fused_ordering(795) 00:13:51.372 fused_ordering(796) 00:13:51.372 fused_ordering(797) 00:13:51.372 fused_ordering(798) 00:13:51.372 fused_ordering(799) 00:13:51.372 fused_ordering(800) 00:13:51.372 fused_ordering(801) 00:13:51.372 fused_ordering(802) 00:13:51.372 fused_ordering(803) 00:13:51.372 fused_ordering(804) 00:13:51.372 fused_ordering(805) 00:13:51.372 fused_ordering(806) 00:13:51.372 fused_ordering(807) 00:13:51.372 fused_ordering(808) 00:13:51.372 fused_ordering(809) 00:13:51.372 fused_ordering(810) 00:13:51.372 fused_ordering(811) 00:13:51.372 fused_ordering(812) 00:13:51.372 fused_ordering(813) 00:13:51.372 fused_ordering(814) 00:13:51.372 fused_ordering(815) 00:13:51.372 fused_ordering(816) 00:13:51.372 fused_ordering(817) 00:13:51.372 fused_ordering(818) 00:13:51.372 fused_ordering(819) 00:13:51.372 fused_ordering(820) 00:13:51.945 fused_ordering(821) 00:13:51.945 fused_ordering(822) 00:13:51.945 fused_ordering(823) 00:13:51.945 fused_ordering(824) 00:13:51.945 fused_ordering(825) 00:13:51.945 fused_ordering(826) 00:13:51.945 fused_ordering(827) 00:13:51.945 fused_ordering(828) 00:13:51.945 fused_ordering(829) 00:13:51.945 fused_ordering(830) 00:13:51.945 fused_ordering(831) 00:13:51.945 fused_ordering(832) 00:13:51.945 fused_ordering(833) 00:13:51.945 fused_ordering(834) 00:13:51.945 fused_ordering(835) 00:13:51.945 fused_ordering(836) 00:13:51.945 fused_ordering(837) 00:13:51.945 fused_ordering(838) 00:13:51.945 fused_ordering(839) 00:13:51.945 fused_ordering(840) 00:13:51.945 fused_ordering(841) 00:13:51.945 fused_ordering(842) 00:13:51.945 fused_ordering(843) 00:13:51.945 fused_ordering(844) 00:13:51.945 fused_ordering(845) 00:13:51.945 fused_ordering(846) 00:13:51.945 fused_ordering(847) 00:13:51.945 fused_ordering(848) 00:13:51.945 fused_ordering(849) 00:13:51.945 fused_ordering(850) 00:13:51.945 
fused_ordering(851) 00:13:51.945 fused_ordering(852) 00:13:51.945 fused_ordering(853) 00:13:51.945 fused_ordering(854) 00:13:51.945 fused_ordering(855) 00:13:51.945 fused_ordering(856) 00:13:51.945 fused_ordering(857) 00:13:51.945 fused_ordering(858) 00:13:51.945 fused_ordering(859) 00:13:51.945 fused_ordering(860) 00:13:51.945 fused_ordering(861) 00:13:51.945 fused_ordering(862) 00:13:51.945 fused_ordering(863) 00:13:51.945 fused_ordering(864) 00:13:51.945 fused_ordering(865) 00:13:51.945 fused_ordering(866) 00:13:51.945 fused_ordering(867) 00:13:51.945 fused_ordering(868) 00:13:51.945 fused_ordering(869) 00:13:51.945 fused_ordering(870) 00:13:51.945 fused_ordering(871) 00:13:51.945 fused_ordering(872) 00:13:51.945 fused_ordering(873) 00:13:51.945 fused_ordering(874) 00:13:51.945 fused_ordering(875) 00:13:51.945 fused_ordering(876) 00:13:51.945 fused_ordering(877) 00:13:51.945 fused_ordering(878) 00:13:51.945 fused_ordering(879) 00:13:51.945 fused_ordering(880) 00:13:51.945 fused_ordering(881) 00:13:51.945 fused_ordering(882) 00:13:51.945 fused_ordering(883) 00:13:51.945 fused_ordering(884) 00:13:51.945 fused_ordering(885) 00:13:51.945 fused_ordering(886) 00:13:51.945 fused_ordering(887) 00:13:51.945 fused_ordering(888) 00:13:51.946 fused_ordering(889) 00:13:51.946 fused_ordering(890) 00:13:51.946 fused_ordering(891) 00:13:51.946 fused_ordering(892) 00:13:51.946 fused_ordering(893) 00:13:51.946 fused_ordering(894) 00:13:51.946 fused_ordering(895) 00:13:51.946 fused_ordering(896) 00:13:51.946 fused_ordering(897) 00:13:51.946 fused_ordering(898) 00:13:51.946 fused_ordering(899) 00:13:51.946 fused_ordering(900) 00:13:51.946 fused_ordering(901) 00:13:51.946 fused_ordering(902) 00:13:51.946 fused_ordering(903) 00:13:51.946 fused_ordering(904) 00:13:51.946 fused_ordering(905) 00:13:51.946 fused_ordering(906) 00:13:51.946 fused_ordering(907) 00:13:51.946 fused_ordering(908) 00:13:51.946 fused_ordering(909) 00:13:51.946 fused_ordering(910) 00:13:51.946 fused_ordering(911) 00:13:51.946 fused_ordering(912) 00:13:51.946 fused_ordering(913) 00:13:51.946 fused_ordering(914) 00:13:51.946 fused_ordering(915) 00:13:51.946 fused_ordering(916) 00:13:51.946 fused_ordering(917) 00:13:51.946 fused_ordering(918) 00:13:51.946 fused_ordering(919) 00:13:51.946 fused_ordering(920) 00:13:51.946 fused_ordering(921) 00:13:51.946 fused_ordering(922) 00:13:51.946 fused_ordering(923) 00:13:51.946 fused_ordering(924) 00:13:51.946 fused_ordering(925) 00:13:51.946 fused_ordering(926) 00:13:51.946 fused_ordering(927) 00:13:51.946 fused_ordering(928) 00:13:51.946 fused_ordering(929) 00:13:51.946 fused_ordering(930) 00:13:51.946 fused_ordering(931) 00:13:51.946 fused_ordering(932) 00:13:51.946 fused_ordering(933) 00:13:51.946 fused_ordering(934) 00:13:51.946 fused_ordering(935) 00:13:51.946 fused_ordering(936) 00:13:51.946 fused_ordering(937) 00:13:51.946 fused_ordering(938) 00:13:51.946 fused_ordering(939) 00:13:51.946 fused_ordering(940) 00:13:51.946 fused_ordering(941) 00:13:51.946 fused_ordering(942) 00:13:51.946 fused_ordering(943) 00:13:51.946 fused_ordering(944) 00:13:51.946 fused_ordering(945) 00:13:51.946 fused_ordering(946) 00:13:51.946 fused_ordering(947) 00:13:51.946 fused_ordering(948) 00:13:51.946 fused_ordering(949) 00:13:51.946 fused_ordering(950) 00:13:51.946 fused_ordering(951) 00:13:51.946 fused_ordering(952) 00:13:51.946 fused_ordering(953) 00:13:51.946 fused_ordering(954) 00:13:51.946 fused_ordering(955) 00:13:51.946 fused_ordering(956) 00:13:51.946 fused_ordering(957) 00:13:51.946 fused_ordering(958) 
00:13:51.946 fused_ordering(959) 00:13:51.946 fused_ordering(960) 00:13:51.946 fused_ordering(961) 00:13:51.946 fused_ordering(962) 00:13:51.946 fused_ordering(963) 00:13:51.946 fused_ordering(964) 00:13:51.946 fused_ordering(965) 00:13:51.946 fused_ordering(966) 00:13:51.946 fused_ordering(967) 00:13:51.946 fused_ordering(968) 00:13:51.946 fused_ordering(969) 00:13:51.946 fused_ordering(970) 00:13:51.946 fused_ordering(971) 00:13:51.946 fused_ordering(972) 00:13:51.946 fused_ordering(973) 00:13:51.946 fused_ordering(974) 00:13:51.946 fused_ordering(975) 00:13:51.946 fused_ordering(976) 00:13:51.946 fused_ordering(977) 00:13:51.946 fused_ordering(978) 00:13:51.946 fused_ordering(979) 00:13:51.946 fused_ordering(980) 00:13:51.946 fused_ordering(981) 00:13:51.946 fused_ordering(982) 00:13:51.946 fused_ordering(983) 00:13:51.946 fused_ordering(984) 00:13:51.946 fused_ordering(985) 00:13:51.946 fused_ordering(986) 00:13:51.946 fused_ordering(987) 00:13:51.946 fused_ordering(988) 00:13:51.946 fused_ordering(989) 00:13:51.946 fused_ordering(990) 00:13:51.946 fused_ordering(991) 00:13:51.946 fused_ordering(992) 00:13:51.946 fused_ordering(993) 00:13:51.946 fused_ordering(994) 00:13:51.946 fused_ordering(995) 00:13:51.946 fused_ordering(996) 00:13:51.946 fused_ordering(997) 00:13:51.946 fused_ordering(998) 00:13:51.946 fused_ordering(999) 00:13:51.946 fused_ordering(1000) 00:13:51.946 fused_ordering(1001) 00:13:51.946 fused_ordering(1002) 00:13:51.946 fused_ordering(1003) 00:13:51.946 fused_ordering(1004) 00:13:51.946 fused_ordering(1005) 00:13:51.946 fused_ordering(1006) 00:13:51.946 fused_ordering(1007) 00:13:51.946 fused_ordering(1008) 00:13:51.946 fused_ordering(1009) 00:13:51.946 fused_ordering(1010) 00:13:51.946 fused_ordering(1011) 00:13:51.946 fused_ordering(1012) 00:13:51.946 fused_ordering(1013) 00:13:51.946 fused_ordering(1014) 00:13:51.946 fused_ordering(1015) 00:13:51.946 fused_ordering(1016) 00:13:51.946 fused_ordering(1017) 00:13:51.946 fused_ordering(1018) 00:13:51.946 fused_ordering(1019) 00:13:51.946 fused_ordering(1020) 00:13:51.946 fused_ordering(1021) 00:13:51.946 fused_ordering(1022) 00:13:51.946 fused_ordering(1023) 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.946 rmmod nvme_tcp 00:13:51.946 rmmod nvme_fabrics 00:13:51.946 rmmod nvme_keyring 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:51.946 13:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1660821 ']' 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1660821 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1660821 ']' 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1660821 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.946 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1660821 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1660821' 00:13:52.207 killing process with pid 1660821 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1660821 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1660821 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.207 13:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.207 13:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.756 00:13:54.756 real 0m13.785s 00:13:54.756 user 0m7.316s 00:13:54.756 sys 0m7.454s 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.756 ************************************ 00:13:54.756 END TEST nvmf_fused_ordering 00:13:54.756 
************************************ 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.756 ************************************ 00:13:54.756 START TEST nvmf_ns_masking 00:13:54.756 ************************************ 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:54.756 * Looking for test storage... 00:13:54.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.756 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.757 --rc genhtml_branch_coverage=1 00:13:54.757 --rc genhtml_function_coverage=1 00:13:54.757 --rc genhtml_legend=1 00:13:54.757 --rc geninfo_all_blocks=1 00:13:54.757 --rc geninfo_unexecuted_blocks=1 00:13:54.757 00:13:54.757 ' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.757 --rc genhtml_branch_coverage=1 00:13:54.757 --rc genhtml_function_coverage=1 00:13:54.757 --rc genhtml_legend=1 00:13:54.757 --rc geninfo_all_blocks=1 00:13:54.757 --rc geninfo_unexecuted_blocks=1 00:13:54.757 00:13:54.757 ' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.757 --rc genhtml_branch_coverage=1 00:13:54.757 --rc genhtml_function_coverage=1 00:13:54.757 --rc genhtml_legend=1 00:13:54.757 --rc geninfo_all_blocks=1 00:13:54.757 --rc geninfo_unexecuted_blocks=1 00:13:54.757 00:13:54.757 ' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.757 --rc genhtml_branch_coverage=1 00:13:54.757 --rc genhtml_function_coverage=1 00:13:54.757 --rc genhtml_legend=1 00:13:54.757 --rc geninfo_all_blocks=1 00:13:54.757 --rc geninfo_unexecuted_blocks=1 00:13:54.757 00:13:54.757 ' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e7d2b05f-c43f-4890-9498-fe4646f8db3e 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=baee8ffd-d84f-4462-ba96-5b7cacfa5e23 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0fd92872-1c4a-45f6-8d29-c7726d07f831 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.757 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.758 13:10:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.903 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.903 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.903 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.903 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.903 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.904 13:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:02.904 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:02.904 13:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:02.904 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:02.904 Found net devices under 0000:31:00.0: cvl_0_0 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
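[Editor's sketch] The trace above shows nvmf/common.sh discovering test NICs: it matches Intel E810 adapters by PCI vendor/device ID (0x8086 with 0x1592 or 0x159b), then collects the kernel net interfaces that sysfs exposes under each matching PCI address. The following is a minimal standalone sketch of that pattern, not the script itself; the sysfs paths and device IDs come from the trace, while variable names and the operstate check placement are illustrative:

    #!/usr/bin/env bash
    # Sketch only: enumerate Intel E810 NICs by PCI vendor/device ID and list
    # the net interfaces sysfs exposes beneath each one, as the trace does.
    intel=0x8086
    e810=(0x1592 0x159b)              # E810 device IDs named in the trace
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        dev_id=$(cat "$pci/device")
        for id in "${e810[@]}"; do
            [[ $dev_id == "$id" ]] || continue
            for netdir in "$pci"/net/*; do
                [[ -e $netdir ]] || continue   # NIC with no bound net device
                # Mirrors the trace's '[[ up == up ]]' link-state check.
                [[ $(cat "$netdir/operstate" 2>/dev/null) == up ]] || continue
                net_devs+=("${netdir##*/}")    # e.g. cvl_0_0, cvl_0_1
            done
        done
    done
    echo "Found ${#net_devs[@]} net device(s): ${net_devs[*]}"

On this rig the loop lands on the two cvl_0_* interfaces, which the subsequent nvmf_tcp_init trace then splits into target and initiator sides.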
00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:02.904 Found net devices under 0000:31:00.1: cvl_0_1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.904 13:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:14:02.904 00:14:02.904 --- 10.0.0.2 ping statistics --- 00:14:02.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.904 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:02.904 00:14:02.904 --- 10.0.0.1 ping statistics --- 00:14:02.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.904 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.904 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1665676 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1665676 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1665676 ']' 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.905 13:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.905 [2024-11-06 13:10:44.053897] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:14:02.905 [2024-11-06 13:10:44.053962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.905 [2024-11-06 13:10:44.154301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.905 [2024-11-06 13:10:44.205838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.905 [2024-11-06 13:10:44.205891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.905 [2024-11-06 13:10:44.205900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.905 [2024-11-06 13:10:44.205908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.905 [2024-11-06 13:10:44.205914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
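To recap the bring-up the trace has completed at this point: nvmf_tcp_init moved the target-side port (cvl_0_0) into a private network namespace so that initiator and target traffic on a single host must actually cross the NIC pair instead of short-circuiting through the local stack, opened the NVMe/TCP port in the firewall, verified reachability in both directions, and then launched nvmf_tgt inside that namespace. Condensed from the commands above (interface names and the 10.0.0.x addresses are this run's values):

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

These appear verbatim in the trace, except that the nvmf_tgt path is shortened from the full Jenkins workspace path and the SPDK_NVMF audit comment that common.sh appends to the iptables rule is omitted here (it matters during teardown; see the note further down).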
00:14:02.905 [2024-11-06 13:10:44.206697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.165 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.165 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:03.166 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.166 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.166 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.166 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.166 13:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.427 [2024-11-06 13:10:45.085427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.427 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:03.427 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:03.427 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:03.427 Malloc1 00:14:03.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:03.688 Malloc2 00:14:03.688 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.949 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:04.210 13:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.210 [2024-11-06 13:10:46.106068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0fd92872-1c4a-45f6-8d29-c7726d07f831 -a 10.0.0.2 -s 4420 -i 4 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.471 13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:04.471 
13:10:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.016 [ 0]:0x1 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a1f1d2767404d27afedf5b222d606a7 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a1f1d2767404d27afedf5b222d606a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.016 [ 0]:0x1 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a1f1d2767404d27afedf5b222d606a7 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a1f1d2767404d27afedf5b222d606a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.016 13:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.016 [ 1]:0x2 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.016 13:10:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.277 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0fd92872-1c4a-45f6-8d29-c7726d07f831 -a 10.0.0.2 -s 4420 -i 4 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:07.537 13:10:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:09.450 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:09.450 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:09.450 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.711 [ 0]:0x2 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=07ee65589c504145a866e82c69198da5 00:14:09.711 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.712 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.973 [ 0]:0x1 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a1f1d2767404d27afedf5b222d606a7 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a1f1d2767404d27afedf5b222d606a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.973 [ 1]:0x2 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.973 13:10:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.234 13:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.234 [ 0]:0x2 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.234 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.494 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:10.494 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.494 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:10.494 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.494 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0fd92872-1c4a-45f6-8d29-c7726d07f831 -a 10.0.0.2 -s 4420 -i 4 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:10.755 13:10:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:13.301 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.302 [ 0]:0x1 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3a1f1d2767404d27afedf5b222d606a7 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3a1f1d2767404d27afedf5b222d606a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.302 [ 1]:0x2 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.302 13:10:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.302 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.563 [ 0]:0x2 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.563 13:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:13.563 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:13.563 [2024-11-06 13:10:55.435679] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:13.563 request: 00:14:13.563 { 00:14:13.563 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.563 "nsid": 2, 00:14:13.563 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.563 "method": "nvmf_ns_remove_host", 00:14:13.563 "req_id": 1 00:14:13.563 } 00:14:13.563 Got JSON-RPC error response 00:14:13.563 response: 00:14:13.563 { 00:14:13.563 "code": -32602, 00:14:13.563 "message": "Invalid parameters" 00:14:13.563 } 00:14:13.823 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:13.824 13:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:13.824 [ 0]:0x2 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07ee65589c504145a866e82c69198da5 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07ee65589c504145a866e82c69198da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1668063 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1668063 /var/tmp/host.sock 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1668063 ']' 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.824 13:10:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:13.824 [2024-11-06 13:10:55.701310] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:14:13.824 [2024-11-06 13:10:55.701361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668063 ] 00:14:14.085 [2024-11-06 13:10:55.789134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.085 [2024-11-06 13:10:55.825450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.655 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.655 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:14.655 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.916 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:15.177 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e7d2b05f-c43f-4890-9498-fe4646f8db3e 00:14:15.177 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:15.177 13:10:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E7D2B05FC43F48909498FE4646F8DB3E -i 00:14:15.177 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid baee8ffd-d84f-4462-ba96-5b7cacfa5e23 00:14:15.177 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:15.177 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BAEE8FFDD84F4462BA965B7CACFA5E23 -i 00:14:15.438 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:15.699 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:15.960 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:15.960 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:16.221 nvme0n1 00:14:16.221 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:16.221 13:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:16.221 nvme1n2 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:16.482 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:16.743 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e7d2b05f-c43f-4890-9498-fe4646f8db3e == \e\7\d\2\b\0\5\f\-\c\4\3\f\-\4\8\9\0\-\9\4\9\8\-\f\e\4\6\4\6\f\8\d\b\3\e ]] 00:14:16.743 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:16.743 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:16.743 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:17.004 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
baee8ffd-d84f-4462-ba96-5b7cacfa5e23 == \b\a\e\e\8\f\f\d\-\d\8\4\f\-\4\4\6\2\-\b\a\9\6\-\5\b\7\c\a\c\f\a\5\e\2\3 ]] 00:14:17.004 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.004 13:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e7d2b05f-c43f-4890-9498-fe4646f8db3e 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7D2B05FC43F48909498FE4646F8DB3E 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7D2B05FC43F48909498FE4646F8DB3E 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.264 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:17.265 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7D2B05FC43F48909498FE4646F8DB3E 00:14:17.531 [2024-11-06 13:10:59.237621] bdev.c:8340:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:17.531 [2024-11-06 13:10:59.237651] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:17.531 [2024-11-06 13:10:59.237658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.531 request: 00:14:17.531 { 00:14:17.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.531 "namespace": { 00:14:17.531 "bdev_name": 
"invalid", 00:14:17.531 "nsid": 1, 00:14:17.531 "nguid": "E7D2B05FC43F48909498FE4646F8DB3E", 00:14:17.531 "no_auto_visible": false 00:14:17.531 }, 00:14:17.531 "method": "nvmf_subsystem_add_ns", 00:14:17.531 "req_id": 1 00:14:17.531 } 00:14:17.531 Got JSON-RPC error response 00:14:17.531 response: 00:14:17.531 { 00:14:17.531 "code": -32602, 00:14:17.531 "message": "Invalid parameters" 00:14:17.531 } 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e7d2b05f-c43f-4890-9498-fe4646f8db3e 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:17.531 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E7D2B05FC43F48909498FE4646F8DB3E -i 00:14:17.813 13:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1668063 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1668063 ']' 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1668063 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:19.788 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1668063 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1668063' 00:14:20.048 killing process with pid 1668063 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1668063 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1668063 00:14:20.048 13:11:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.308 rmmod nvme_tcp 00:14:20.308 rmmod nvme_fabrics 00:14:20.308 rmmod nvme_keyring 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1665676 ']' 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1665676 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1665676 ']' 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1665676 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.308 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1665676 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1665676' 00:14:20.569 killing process with pid 1665676 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1665676 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1665676 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
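(The iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline traced around this point is the firewall cleanup: setup tagged its ACCEPT rule with an '-m comment --comment SPDK_NVMF:...' marker, so teardown can strip exactly the tagged rules from the saved ruleset and reload the rest, without tracking rule numbers.)

Before the next test begins, a condensed replay of what the masking portion of this trace verified, using the RPCs and nvme-cli commands recorded above (rpc.py stands for the full workspace path; NQNs, NSIDs, and 10.0.0.2 are this run's values):

  # A namespace added with --no-auto-visible is hidden from every host by default.
  rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # Grant / revoke per-host visibility for NSID 1.
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # From the initiator, a masked namespace drops out of list-ns and its
  # Identify Namespace data degrades to an all-zero NGUID:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  nvme list-ns /dev/nvme0
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

The two JSON-RPC 'Invalid parameters' errors above are the negative half of the test: nvmf_ns_remove_host against a namespace that was never placed under per-host masking, and nvmf_subsystem_add_ns with a nonexistent bdev name, are both expected to fail, and the NOT wrapper converts each expected failure into a pass.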
00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.569 13:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.112 00:14:23.112 real 0m28.250s 00:14:23.112 user 0m32.069s 00:14:23.112 sys 0m8.337s 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:23.112 ************************************ 00:14:23.112 END TEST nvmf_ns_masking 00:14:23.112 ************************************ 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.112 ************************************ 00:14:23.112 START TEST nvmf_nvme_cli 00:14:23.112 ************************************ 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:23.112 * Looking for test storage... 
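The iptr call traced above is the firewall half of nvmftestfini. Every rule the tests install is tagged with an SPDK_NVMF comment (the ipts helper that adds the tag appears later in this log), so teardown can drop exactly those rules by round-tripping the ruleset. A sketch matching the three commands visible in the trace:

    iptr() {
        # Re-load the ruleset minus every SPDK_NVMF-tagged rule.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }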
00:14:23.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.112 --rc genhtml_branch_coverage=1 00:14:23.112 --rc genhtml_function_coverage=1 00:14:23.112 --rc genhtml_legend=1 00:14:23.112 --rc geninfo_all_blocks=1 00:14:23.112 --rc geninfo_unexecuted_blocks=1 00:14:23.112 00:14:23.112 ' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.112 --rc genhtml_branch_coverage=1 00:14:23.112 --rc genhtml_function_coverage=1 00:14:23.112 --rc genhtml_legend=1 00:14:23.112 --rc geninfo_all_blocks=1 00:14:23.112 --rc geninfo_unexecuted_blocks=1 00:14:23.112 00:14:23.112 ' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.112 --rc genhtml_branch_coverage=1 00:14:23.112 --rc genhtml_function_coverage=1 00:14:23.112 --rc genhtml_legend=1 00:14:23.112 --rc geninfo_all_blocks=1 00:14:23.112 --rc geninfo_unexecuted_blocks=1 00:14:23.112 00:14:23.112 ' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.112 --rc genhtml_branch_coverage=1 00:14:23.112 --rc genhtml_function_coverage=1 00:14:23.112 --rc genhtml_legend=1 00:14:23.112 --rc geninfo_all_blocks=1 00:14:23.112 --rc geninfo_unexecuted_blocks=1 00:14:23.112 00:14:23.112 ' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
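The cmp_versions walk traced above (lt 1.15 2, checking whether the installed lcov predates 2.0) splits both version strings on .-: and compares them numerically field by field. A simplified sketch of just the less-than case; scripts/common.sh implements the general cmp_versions with an operator argument, so this is an illustration rather than the verbatim body:

    lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1.15 < 2
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }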
00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.112 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.113 13:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.113 13:11:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:31.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:31.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.255 
13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:31.255 Found net devices under 0000:31:00.0: cvl_0_0 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:31.255 Found net devices under 0000:31:00.1: cvl_0_1 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:31.255 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:31.256 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:31.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:14:31.256 00:14:31.256 --- 10.0.0.2 ping statistics --- 00:14:31.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.256 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:14:31.256 00:14:31.256 --- 10.0.0.1 ping statistics --- 00:14:31.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.256 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1673665 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1673665 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1673665 ']' 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.256 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.256 [2024-11-06 13:11:12.414257] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
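Condensed from the records above, the network plumbing nvmftestinit just performed puts the target-side e810 port in its own namespace and leaves the initiator port in the default one, so a single host can talk NVMe/TCP to itself. Commands copied from this run's trace (the SPDK_NVMF rule comment is elided):

    ip netns add cvl_0_0_ns_spdk                       # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit port 4420
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check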
00:14:31.256 [2024-11-06 13:11:12.414324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.256 [2024-11-06 13:11:12.516633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.256 [2024-11-06 13:11:12.571667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.256 [2024-11-06 13:11:12.571727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.256 [2024-11-06 13:11:12.571736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.256 [2024-11-06 13:11:12.571744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.256 [2024-11-06 13:11:12.571760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.256 [2024-11-06 13:11:12.573872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.256 [2024-11-06 13:11:12.574161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.256 [2024-11-06 13:11:12.574319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.256 [2024-11-06 13:11:12.574322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 [2024-11-06 13:11:13.283063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 Malloc0 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
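For reference, the rpc_cmd bring-up the nvme_cli test performs around this point is equivalent to the following rpc.py sequence (arguments taken from the trace records; this assumes rpc_cmd simply forwards to scripts/rpc.py against the default /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x BLOCK_SIZE
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420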
00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 Malloc1 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.517 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.518 [2024-11-06 13:11:13.399568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.518 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:14:31.778 00:14:31.778 Discovery Log Number of Records 2, Generation counter 2 00:14:31.778 =====Discovery Log Entry 0====== 00:14:31.778 trtype: tcp 00:14:31.778 adrfam: ipv4 00:14:31.778 subtype: current discovery subsystem 00:14:31.778 treq: not required 00:14:31.778 portid: 0 00:14:31.778 trsvcid: 4420 00:14:31.778 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:31.778 traddr: 10.0.0.2 00:14:31.778 eflags: explicit discovery connections, duplicate discovery information 00:14:31.778 sectype: none 00:14:31.778 =====Discovery Log Entry 1====== 00:14:31.778 trtype: tcp 00:14:31.778 adrfam: ipv4 00:14:31.778 subtype: nvme subsystem 00:14:31.778 treq: not required 00:14:31.778 portid: 0 00:14:31.778 trsvcid: 4420 00:14:31.778 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:31.778 traddr: 10.0.0.2 00:14:31.778 eflags: none 00:14:31.778 sectype: none 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:31.778 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:33.692 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:33.692 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:33.692 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.692 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:33.693 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:33.693 13:11:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:35.604 13:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:35.604 /dev/nvme0n2 ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:35.604 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.605 13:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.605 rmmod nvme_tcp 00:14:35.605 rmmod nvme_fabrics 00:14:35.605 rmmod nvme_keyring 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1673665 ']' 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1673665 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1673665 ']' 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1673665 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.605 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1673665 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1673665' 00:14:35.866 killing process with pid 1673665 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1673665 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1673665 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.866 13:11:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.411 00:14:38.411 real 0m15.250s 00:14:38.411 user 0m22.610s 00:14:38.411 sys 0m6.469s 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 ************************************ 00:14:38.411 END TEST nvmf_nvme_cli 00:14:38.411 ************************************ 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 ************************************ 00:14:38.411 START TEST nvmf_vfio_user 00:14:38.411 ************************************ 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:38.411 * Looking for test storage... 00:14:38.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.411 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.411 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.412 --rc genhtml_branch_coverage=1 00:14:38.412 --rc genhtml_function_coverage=1 00:14:38.412 --rc genhtml_legend=1 00:14:38.412 --rc geninfo_all_blocks=1 00:14:38.412 --rc geninfo_unexecuted_blocks=1 00:14:38.412 00:14:38.412 ' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.412 --rc genhtml_branch_coverage=1 00:14:38.412 --rc genhtml_function_coverage=1 00:14:38.412 --rc genhtml_legend=1 00:14:38.412 --rc geninfo_all_blocks=1 00:14:38.412 --rc geninfo_unexecuted_blocks=1 00:14:38.412 00:14:38.412 ' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.412 --rc genhtml_branch_coverage=1 00:14:38.412 --rc genhtml_function_coverage=1 00:14:38.412 --rc genhtml_legend=1 00:14:38.412 --rc geninfo_all_blocks=1 00:14:38.412 --rc geninfo_unexecuted_blocks=1 00:14:38.412 00:14:38.412 ' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.412 --rc genhtml_branch_coverage=1 00:14:38.412 --rc genhtml_function_coverage=1 00:14:38.412 --rc genhtml_legend=1 00:14:38.412 --rc geninfo_all_blocks=1 00:14:38.412 --rc geninfo_unexecuted_blocks=1 00:14:38.412 00:14:38.412 ' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
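The `[: : integer expression expected` message above is bash's test builtin rejecting `'[' '' -eq 1 ']'`: build_nvmf_app_args at nvmf/common.sh line 33 compares an unset configuration flag numerically, so test(1) sees an empty string where it expects an integer. A minimal sketch of the defensive form, assuming the flag is an optional 0/1 knob from autorun-spdk.conf (FLAG and the appended argument are hypothetical placeholders, not the script's actual names):

  # Default the expansion so the numeric test always sees an integer,
  # even when the flag was never set in the build configuration.
  if [ "${FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--hypothetical-arg)   # placeholder argument for illustration
  fi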
00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1675300 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1675300' 00:14:38.412 Process pid: 1675300 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1675300 00:14:38.412 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1675300 ']' 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:38.413 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:38.413 [2024-11-06 13:11:20.127596] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:14:38.413 [2024-11-06 13:11:20.127665] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.413 [2024-11-06 13:11:20.216539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.413 [2024-11-06 13:11:20.255833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.413 [2024-11-06 13:11:20.255877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:38.413 [2024-11-06 13:11:20.255883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.413 [2024-11-06 13:11:20.255888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.413 [2024-11-06 13:11:20.255893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.413 [2024-11-06 13:11:20.257376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.413 [2024-11-06 13:11:20.257531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.413 [2024-11-06 13:11:20.257653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.413 [2024-11-06 13:11:20.257655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.357 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.357 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:39.357 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:40.298 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:40.298 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:40.298 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:40.298 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:40.299 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:40.299 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:40.560 Malloc1 00:14:40.560 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:40.821 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:40.821 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:41.083 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:41.083 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:41.083 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:41.343 Malloc2 00:14:41.343 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
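Condensed from the xtrace above: setup_nvmf_vfio_user launches a single nvmf_tgt and then provisions each of the NUM_DEVICES=2 vfio-user controllers with the same five-step RPC sequence (Malloc1/cnode1 is complete at this point; Malloc2/cnode2 finishes on the lines that follow). A sketch built only from commands visible in this trace, with repository paths shortened and rpc.py standing for scripts/rpc.py:

  # Start the target on cores 0-3, then drive it over /var/tmp/spdk.sock.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      dir=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$dir"                                  # socket directory per controller
      rpc.py bdev_malloc_create 64 512 -b Malloc$i     # 64 MB bdev, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done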
00:14:41.604 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:41.604 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:41.867 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:41.867 [2024-11-06 13:11:23.646849] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:14:41.867 [2024-11-06 13:11:23.646894] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675997 ] 00:14:41.867 [2024-11-06 13:11:23.686104] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:41.867 [2024-11-06 13:11:23.691373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:41.867 [2024-11-06 13:11:23.691390] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f07bb6b3000 00:14:41.867 [2024-11-06 13:11:23.692373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.693375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.694379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.695393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.696395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.697400] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.698409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.699414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:41.867 [2024-11-06 13:11:23.700427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:41.867 [2024-11-06 13:11:23.700434] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f07bb6a8000 00:14:41.867 [2024-11-06 13:11:23.701350] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:41.867 [2024-11-06 13:11:23.710789] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:41.867 [2024-11-06 13:11:23.710812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:41.867 [2024-11-06 13:11:23.716529] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:41.867 [2024-11-06 13:11:23.716566] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:41.867 [2024-11-06 13:11:23.716629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:41.867 [2024-11-06 13:11:23.716644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:41.867 [2024-11-06 13:11:23.716648] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:41.867 [2024-11-06 13:11:23.717523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:41.867 [2024-11-06 13:11:23.717530] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:41.867 [2024-11-06 13:11:23.717535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:41.867 [2024-11-06 13:11:23.718531] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:41.867 [2024-11-06 13:11:23.718537] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:41.867 [2024-11-06 13:11:23.718542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.719538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:41.867 [2024-11-06 13:11:23.719544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.720545] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:14:41.867 [2024-11-06 13:11:23.720550] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:41.867 [2024-11-06 13:11:23.720554] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.720559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.720665] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:41.867 [2024-11-06 13:11:23.720668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.720672] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:41.867 [2024-11-06 13:11:23.721556] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:41.867 [2024-11-06 13:11:23.722561] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:41.867 [2024-11-06 13:11:23.723565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:41.867 [2024-11-06 13:11:23.724560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.867 [2024-11-06 13:11:23.724625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:41.867 [2024-11-06 13:11:23.725571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:41.867 [2024-11-06 13:11:23.725577] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:41.867 [2024-11-06 13:11:23.725580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:41.867 [2024-11-06 13:11:23.725600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725613] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.867 [2024-11-06 13:11:23.725616] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.867 [2024-11-06 13:11:23.725619] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.867 [2024-11-06 13:11:23.725631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:41.867 [2024-11-06 13:11:23.725676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:41.867 [2024-11-06 13:11:23.725684] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:41.867 [2024-11-06 13:11:23.725688] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:41.867 [2024-11-06 13:11:23.725691] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:41.867 [2024-11-06 13:11:23.725694] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:41.867 [2024-11-06 13:11:23.725701] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:41.867 [2024-11-06 13:11:23.725704] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:41.867 [2024-11-06 13:11:23.725707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:41.867 [2024-11-06 13:11:23.725732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:41.867 [2024-11-06 13:11:23.725741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.867 [2024-11-06 13:11:23.725749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.867 [2024-11-06 13:11:23.725757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.867 [2024-11-06 13:11:23.725763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:41.867 [2024-11-06 13:11:23.725767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:41.867 [2024-11-06 13:11:23.725779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:41.867 [2024-11-06 13:11:23.725788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.725794] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:41.868 
[2024-11-06 13:11:23.725798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.725826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.725870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725882] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:41.868 [2024-11-06 13:11:23.725885] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:41.868 [2024-11-06 13:11:23.725887] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.725892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.725900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.725908] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:41.868 [2024-11-06 13:11:23.725918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725929] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.868 [2024-11-06 13:11:23.725932] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.868 [2024-11-06 13:11:23.725934] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.725939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.725954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.725964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.725975] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:41.868 [2024-11-06 13:11:23.725978] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.868 [2024-11-06 13:11:23.725981] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.725985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.725996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726030] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:41.868 [2024-11-06 13:11:23.726033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:41.868 [2024-11-06 13:11:23.726037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:41.868 [2024-11-06 13:11:23.726051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726123] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:41.868 [2024-11-06 13:11:23.726127] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:41.868 [2024-11-06 13:11:23.726129] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:41.868 [2024-11-06 13:11:23.726132] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:41.868 [2024-11-06 13:11:23.726134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:41.868 [2024-11-06 13:11:23.726139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:41.868 [2024-11-06 13:11:23.726145] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:41.868 [2024-11-06 13:11:23.726148] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:41.868 [2024-11-06 13:11:23.726150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.726154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726160] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:41.868 [2024-11-06 13:11:23.726163] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:41.868 [2024-11-06 13:11:23.726165] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.726169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726175] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:41.868 [2024-11-06 13:11:23.726178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:41.868 [2024-11-06 13:11:23.726181] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:41.868 [2024-11-06 13:11:23.726185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:41.868 [2024-11-06 13:11:23.726190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:41.868 [2024-11-06 13:11:23.726213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:41.868 ===================================================== 00:14:41.868 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:41.868 ===================================================== 00:14:41.868 Controller Capabilities/Features 00:14:41.868 ================================ 00:14:41.868 Vendor ID: 4e58 00:14:41.868 Subsystem Vendor ID: 4e58 00:14:41.868 Serial Number: SPDK1 00:14:41.868 Model Number: SPDK bdev Controller 00:14:41.868 Firmware Version: 25.01 00:14:41.868 Recommended Arb Burst: 6 00:14:41.868 IEEE OUI Identifier: 8d 6b 50 00:14:41.868 Multi-path I/O 00:14:41.868 May have multiple subsystem ports: Yes 00:14:41.868 May have multiple controllers: Yes 00:14:41.868 Associated with SR-IOV VF: No 00:14:41.868 Max Data Transfer Size: 131072 00:14:41.868 Max Number of Namespaces: 32 00:14:41.868 Max Number of I/O Queues: 127 00:14:41.868 NVMe Specification Version (VS): 1.3 00:14:41.869 NVMe Specification Version (Identify): 1.3 00:14:41.869 Maximum Queue Entries: 256 00:14:41.869 Contiguous Queues Required: Yes 00:14:41.869 Arbitration Mechanisms Supported 00:14:41.869 Weighted Round Robin: Not Supported 00:14:41.869 Vendor Specific: Not Supported 00:14:41.869 Reset Timeout: 15000 ms 00:14:41.869 Doorbell Stride: 4 bytes 00:14:41.869 NVM Subsystem Reset: Not Supported 00:14:41.869 Command Sets Supported 00:14:41.869 NVM Command Set: Supported 00:14:41.869 Boot Partition: Not Supported 00:14:41.869 Memory Page Size Minimum: 4096 bytes 00:14:41.869 Memory Page Size Maximum: 4096 bytes 00:14:41.869 Persistent Memory Region: Not Supported 00:14:41.869 Optional Asynchronous Events Supported 00:14:41.869 Namespace Attribute Notices: Supported 00:14:41.869 Firmware Activation Notices: Not Supported 00:14:41.869 ANA Change Notices: Not Supported 00:14:41.869 PLE Aggregate Log Change Notices: Not Supported 00:14:41.869 LBA Status Info Alert Notices: Not Supported 00:14:41.869 EGE Aggregate Log Change Notices: Not Supported 00:14:41.869 Normal NVM Subsystem Shutdown event: Not Supported 00:14:41.869 Zone Descriptor Change Notices: Not Supported 00:14:41.869 Discovery Log Change Notices: Not Supported 00:14:41.869 Controller Attributes 00:14:41.869 128-bit Host Identifier: Supported 00:14:41.869 Non-Operational Permissive Mode: Not Supported 00:14:41.869 NVM Sets: Not Supported 00:14:41.869 Read Recovery Levels: Not Supported 00:14:41.869 Endurance Groups: Not Supported 00:14:41.869 Predictable Latency Mode: Not Supported 00:14:41.869 Traffic Based Keep ALive: Not Supported 00:14:41.869 Namespace Granularity: Not Supported 00:14:41.869 SQ Associations: Not Supported 00:14:41.869 UUID List: Not Supported 00:14:41.869 Multi-Domain Subsystem: Not Supported 00:14:41.869 Fixed Capacity Management: Not Supported 00:14:41.869 Variable Capacity Management: Not Supported 00:14:41.869 Delete Endurance Group: Not Supported 00:14:41.869 Delete NVM Set: Not Supported 00:14:41.869 Extended LBA Formats Supported: Not Supported 00:14:41.869 Flexible Data Placement Supported: Not Supported 00:14:41.869 00:14:41.869 Controller Memory Buffer Support 00:14:41.869 ================================ 00:14:41.869 
Supported: No 00:14:41.869 00:14:41.869 Persistent Memory Region Support 00:14:41.869 ================================ 00:14:41.869 Supported: No 00:14:41.869 00:14:41.869 Admin Command Set Attributes 00:14:41.869 ============================ 00:14:41.869 Security Send/Receive: Not Supported 00:14:41.869 Format NVM: Not Supported 00:14:41.869 Firmware Activate/Download: Not Supported 00:14:41.869 Namespace Management: Not Supported 00:14:41.869 Device Self-Test: Not Supported 00:14:41.869 Directives: Not Supported 00:14:41.869 NVMe-MI: Not Supported 00:14:41.869 Virtualization Management: Not Supported 00:14:41.869 Doorbell Buffer Config: Not Supported 00:14:41.869 Get LBA Status Capability: Not Supported 00:14:41.869 Command & Feature Lockdown Capability: Not Supported 00:14:41.869 Abort Command Limit: 4 00:14:41.869 Async Event Request Limit: 4 00:14:41.869 Number of Firmware Slots: N/A 00:14:41.869 Firmware Slot 1 Read-Only: N/A 00:14:41.869 Firmware Activation Without Reset: N/A 00:14:41.869 Multiple Update Detection Support: N/A 00:14:41.869 Firmware Update Granularity: No Information Provided 00:14:41.869 Per-Namespace SMART Log: No 00:14:41.869 Asymmetric Namespace Access Log Page: Not Supported 00:14:41.869 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:41.869 Command Effects Log Page: Supported 00:14:41.869 Get Log Page Extended Data: Supported 00:14:41.869 Telemetry Log Pages: Not Supported 00:14:41.869 Persistent Event Log Pages: Not Supported 00:14:41.869 Supported Log Pages Log Page: May Support 00:14:41.869 Commands Supported & Effects Log Page: Not Supported 00:14:41.869 Feature Identifiers & Effects Log Page:May Support 00:14:41.869 NVMe-MI Commands & Effects Log Page: May Support 00:14:41.869 Data Area 4 for Telemetry Log: Not Supported 00:14:41.869 Error Log Page Entries Supported: 128 00:14:41.869 Keep Alive: Supported 00:14:41.869 Keep Alive Granularity: 10000 ms 00:14:41.869 00:14:41.869 NVM Command Set Attributes 00:14:41.869 ========================== 00:14:41.869 Submission Queue Entry Size 00:14:41.869 Max: 64 00:14:41.869 Min: 64 00:14:41.869 Completion Queue Entry Size 00:14:41.869 Max: 16 00:14:41.869 Min: 16 00:14:41.869 Number of Namespaces: 32 00:14:41.869 Compare Command: Supported 00:14:41.869 Write Uncorrectable Command: Not Supported 00:14:41.869 Dataset Management Command: Supported 00:14:41.869 Write Zeroes Command: Supported 00:14:41.869 Set Features Save Field: Not Supported 00:14:41.869 Reservations: Not Supported 00:14:41.869 Timestamp: Not Supported 00:14:41.869 Copy: Supported 00:14:41.869 Volatile Write Cache: Present 00:14:41.869 Atomic Write Unit (Normal): 1 00:14:41.869 Atomic Write Unit (PFail): 1 00:14:41.869 Atomic Compare & Write Unit: 1 00:14:41.869 Fused Compare & Write: Supported 00:14:41.869 Scatter-Gather List 00:14:41.869 SGL Command Set: Supported (Dword aligned) 00:14:41.869 SGL Keyed: Not Supported 00:14:41.869 SGL Bit Bucket Descriptor: Not Supported 00:14:41.869 SGL Metadata Pointer: Not Supported 00:14:41.869 Oversized SGL: Not Supported 00:14:41.869 SGL Metadata Address: Not Supported 00:14:41.869 SGL Offset: Not Supported 00:14:41.869 Transport SGL Data Block: Not Supported 00:14:41.869 Replay Protected Memory Block: Not Supported 00:14:41.869 00:14:41.869 Firmware Slot Information 00:14:41.869 ========================= 00:14:41.869 Active slot: 1 00:14:41.869 Slot 1 Firmware Revision: 25.01 00:14:41.869 00:14:41.869 00:14:41.869 Commands Supported and Effects 00:14:41.869 ============================== 00:14:41.869 Admin 
Commands 00:14:41.869 -------------- 00:14:41.869 Get Log Page (02h): Supported 00:14:41.869 Identify (06h): Supported 00:14:41.869 Abort (08h): Supported 00:14:41.869 Set Features (09h): Supported 00:14:41.869 Get Features (0Ah): Supported 00:14:41.869 Asynchronous Event Request (0Ch): Supported 00:14:41.869 Keep Alive (18h): Supported 00:14:41.869 I/O Commands 00:14:41.869 ------------ 00:14:41.869 Flush (00h): Supported LBA-Change 00:14:41.869 Write (01h): Supported LBA-Change 00:14:41.869 Read (02h): Supported 00:14:41.869 Compare (05h): Supported 00:14:41.869 Write Zeroes (08h): Supported LBA-Change 00:14:41.869 Dataset Management (09h): Supported LBA-Change 00:14:41.869 Copy (19h): Supported LBA-Change 00:14:41.869 00:14:41.869 Error Log 00:14:41.869 ========= 00:14:41.869 00:14:41.869 Arbitration 00:14:41.869 =========== 00:14:41.869 Arbitration Burst: 1 00:14:41.869 00:14:41.869 Power Management 00:14:41.869 ================ 00:14:41.869 Number of Power States: 1 00:14:41.869 Current Power State: Power State #0 00:14:41.869 Power State #0: 00:14:41.869 Max Power: 0.00 W 00:14:41.869 Non-Operational State: Operational 00:14:41.869 Entry Latency: Not Reported 00:14:41.869 Exit Latency: Not Reported 00:14:41.869 Relative Read Throughput: 0 00:14:41.869 Relative Read Latency: 0 00:14:41.869 Relative Write Throughput: 0 00:14:41.869 Relative Write Latency: 0 00:14:41.869 Idle Power: Not Reported 00:14:41.869 Active Power: Not Reported 00:14:41.869 Non-Operational Permissive Mode: Not Supported 00:14:41.869 00:14:41.869 Health Information 00:14:41.869 ================== 00:14:41.869 Critical Warnings: 00:14:41.869 Available Spare Space: OK 00:14:41.869 Temperature: OK 00:14:41.869 Device Reliability: OK 00:14:41.869 Read Only: No 00:14:41.869 Volatile Memory Backup: OK 00:14:41.869 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:41.869 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:41.869 Available Spare: 0% 00:14:41.869 Available Sp[2024-11-06 13:11:23.726332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:41.869 [2024-11-06 13:11:23.726340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:41.869 [2024-11-06 13:11:23.726363] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:41.869 [2024-11-06 13:11:23.726370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.869 [2024-11-06 13:11:23.726375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.870 [2024-11-06 13:11:23.726379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.870 [2024-11-06 13:11:23.726385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:41.870 [2024-11-06 13:11:23.729751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:41.870 [2024-11-06 13:11:23.729759] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:41.870 [2024-11-06 13:11:23.730584] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.870 [2024-11-06 13:11:23.730624] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:41.870 [2024-11-06 13:11:23.730629] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:41.870 [2024-11-06 13:11:23.731589] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:41.870 [2024-11-06 13:11:23.731598] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:41.870 [2024-11-06 13:11:23.731654] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:41.870 [2024-11-06 13:11:23.732618] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:41.870 are Threshold: 0% 00:14:41.870 Life Percentage Used: 0% 00:14:41.870 Data Units Read: 0 00:14:41.870 Data Units Written: 0 00:14:41.870 Host Read Commands: 0 00:14:41.870 Host Write Commands: 0 00:14:41.870 Controller Busy Time: 0 minutes 00:14:41.870 Power Cycles: 0 00:14:41.870 Power On Hours: 0 hours 00:14:41.870 Unsafe Shutdowns: 0 00:14:41.870 Unrecoverable Media Errors: 0 00:14:41.870 Lifetime Error Log Entries: 0 00:14:41.870 Warning Temperature Time: 0 minutes 00:14:41.870 Critical Temperature Time: 0 minutes 00:14:41.870 00:14:41.870 Number of Queues 00:14:41.870 ================ 00:14:41.870 Number of I/O Submission Queues: 127 00:14:41.870 Number of I/O Completion Queues: 127 00:14:41.870 00:14:41.870 Active Namespaces 00:14:41.870 ================= 00:14:41.870 Namespace ID:1 00:14:41.870 Error Recovery Timeout: Unlimited 00:14:41.870 Command Set Identifier: NVM (00h) 00:14:41.870 Deallocate: Supported 00:14:41.870 Deallocated/Unwritten Error: Not Supported 00:14:41.870 Deallocated Read Value: Unknown 00:14:41.870 Deallocate in Write Zeroes: Not Supported 00:14:41.870 Deallocated Guard Field: 0xFFFF 00:14:41.870 Flush: Supported 00:14:41.870 Reservation: Supported 00:14:41.870 Namespace Sharing Capabilities: Multiple Controllers 00:14:41.870 Size (in LBAs): 131072 (0GiB) 00:14:41.870 Capacity (in LBAs): 131072 (0GiB) 00:14:41.870 Utilization (in LBAs): 131072 (0GiB) 00:14:41.870 NGUID: EDF6474C3544466E908478E67C70D1C1 00:14:41.870 UUID: edf6474c-3544-466e-9084-78e67c70d1c1 00:14:41.870 Thin Provisioning: Not Supported 00:14:41.870 Per-NS Atomic Units: Yes 00:14:41.870 Atomic Boundary Size (Normal): 0 00:14:41.870 Atomic Boundary Size (PFail): 0 00:14:41.870 Atomic Boundary Offset: 0 00:14:41.870 Maximum Single Source Range Length: 65535 00:14:41.870 Maximum Copy Length: 65535 00:14:41.870 Maximum Source Range Count: 1 00:14:41.870 NGUID/EUI64 Never Reused: No 00:14:41.870 Namespace Write Protected: No 00:14:41.870 Number of LBA Formats: 1 00:14:41.870 Current LBA Format: LBA Format #00 00:14:41.870 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:41.870 00:14:41.870 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
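Worth noting how spdk_nvme_identify above and the spdk_nvme_perf run just issued reach the controller: not via a PCI address but via an SPDK transport ID string naming the vfio-user socket directory and the subsystem NQN. The shared pattern, using exactly the values from this run (the -g flag corresponds to the --single-file-segments DPDK EAL parameter visible in the identify banner above):

  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
  build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2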
00:14:42.131 [2024-11-06 13:11:23.920431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.423 Initializing NVMe Controllers 00:14:47.423 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:47.423 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:47.423 Initialization complete. Launching workers. 00:14:47.423 ======================================================== 00:14:47.423 Latency(us) 00:14:47.423 Device Information : IOPS MiB/s Average min max 00:14:47.423 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39985.77 156.19 3200.81 840.70 8750.53 00:14:47.423 ======================================================== 00:14:47.423 Total : 39985.77 156.19 3200.81 840.70 8750.53 00:14:47.423 00:14:47.423 [2024-11-06 13:11:28.936717] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.423 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:47.423 [2024-11-06 13:11:29.131595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.705 Initializing NVMe Controllers 00:14:52.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:52.705 Initialization complete. Launching workers. 
00:14:52.705 ======================================================== 00:14:52.705 Latency(us) 00:14:52.705 Device Information : IOPS MiB/s Average min max 00:14:52.705 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.55 4988.47 9979.27 00:14:52.705 ======================================================== 00:14:52.705 Total : 16076.80 62.80 7972.55 4988.47 9979.27 00:14:52.705 00:14:52.705 [2024-11-06 13:11:34.169180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.705 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:52.705 [2024-11-06 13:11:34.372011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.986 [2024-11-06 13:11:39.460035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.986 Initializing NVMe Controllers 00:14:57.986 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.986 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:57.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:57.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:57.986 Initialization complete. Launching workers. 00:14:57.986 Starting thread on core 2 00:14:57.986 Starting thread on core 3 00:14:57.986 Starting thread on core 1 00:14:57.986 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:57.986 [2024-11-06 13:11:39.710100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.287 [2024-11-06 13:11:42.774937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.287 Initializing NVMe Controllers 00:15:01.287 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.287 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.287 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:01.287 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:01.287 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:01.287 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:01.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:01.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:01.287 Initialization complete. Launching workers. 
00:15:01.287 Starting thread on core 1 with urgent priority queue 00:15:01.287 Starting thread on core 2 with urgent priority queue 00:15:01.287 Starting thread on core 3 with urgent priority queue 00:15:01.287 Starting thread on core 0 with urgent priority queue 00:15:01.287 SPDK bdev Controller (SPDK1 ) core 0: 11642.00 IO/s 8.59 secs/100000 ios 00:15:01.287 SPDK bdev Controller (SPDK1 ) core 1: 8115.00 IO/s 12.32 secs/100000 ios 00:15:01.287 SPDK bdev Controller (SPDK1 ) core 2: 11939.33 IO/s 8.38 secs/100000 ios 00:15:01.287 SPDK bdev Controller (SPDK1 ) core 3: 10053.67 IO/s 9.95 secs/100000 ios 00:15:01.287 ======================================================== 00:15:01.287 00:15:01.287 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:01.287 [2024-11-06 13:11:43.014325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.287 Initializing NVMe Controllers 00:15:01.287 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.287 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.287 Namespace ID: 1 size: 0GB 00:15:01.287 Initialization complete. 00:15:01.287 INFO: using host memory buffer for IO 00:15:01.287 Hello world! 00:15:01.287 [2024-11-06 13:11:43.048523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.287 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:01.598 [2024-11-06 13:11:43.289092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.547 Initializing NVMe Controllers 00:15:02.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.547 Initialization complete. Launching workers. 
00:15:02.547 submit (in ns) avg, min, max = 5301.9, 2843.3, 3998675.8 00:15:02.547 complete (in ns) avg, min, max = 17458.6, 1636.7, 3997823.3 00:15:02.547 00:15:02.547 Submit histogram 00:15:02.547 ================ 00:15:02.547 Range in us Cumulative Count 00:15:02.547 2.840 - 2.853: 0.3566% ( 74) 00:15:02.547 2.853 - 2.867: 1.7783% ( 295) 00:15:02.547 2.867 - 2.880: 4.6169% ( 589) 00:15:02.547 2.880 - 2.893: 9.6867% ( 1052) 00:15:02.547 2.893 - 2.907: 15.5277% ( 1212) 00:15:02.547 2.907 - 2.920: 21.7060% ( 1282) 00:15:02.547 2.920 - 2.933: 27.2193% ( 1144) 00:15:02.547 2.933 - 2.947: 32.5783% ( 1112) 00:15:02.547 2.947 - 2.960: 37.7494% ( 1073) 00:15:02.547 2.960 - 2.973: 43.0024% ( 1090) 00:15:02.547 2.973 - 2.987: 47.7831% ( 992) 00:15:02.547 2.987 - 3.000: 54.3759% ( 1368) 00:15:02.547 3.000 - 3.013: 62.9783% ( 1785) 00:15:02.547 3.013 - 3.027: 71.9904% ( 1870) 00:15:02.547 3.027 - 3.040: 79.5518% ( 1569) 00:15:02.547 3.040 - 3.053: 86.0145% ( 1341) 00:15:02.547 3.053 - 3.067: 91.3398% ( 1105) 00:15:02.547 3.067 - 3.080: 94.9831% ( 756) 00:15:02.547 3.080 - 3.093: 97.2771% ( 476) 00:15:02.547 3.093 - 3.107: 98.4530% ( 244) 00:15:02.547 3.107 - 3.120: 99.0313% ( 120) 00:15:02.547 3.120 - 3.133: 99.3639% ( 69) 00:15:02.547 3.133 - 3.147: 99.4988% ( 28) 00:15:02.547 3.147 - 3.160: 99.5325% ( 7) 00:15:02.547 3.160 - 3.173: 99.5807% ( 10) 00:15:02.547 3.173 - 3.187: 99.5855% ( 1) 00:15:02.547 3.187 - 3.200: 99.5952% ( 2) 00:15:02.547 3.240 - 3.253: 99.6000% ( 1) 00:15:02.547 3.267 - 3.280: 99.6048% ( 1) 00:15:02.547 3.307 - 3.320: 99.6096% ( 1) 00:15:02.547 3.320 - 3.333: 99.6145% ( 1) 00:15:02.547 3.400 - 3.413: 99.6193% ( 1) 00:15:02.547 3.653 - 3.680: 99.6289% ( 2) 00:15:02.547 4.027 - 4.053: 99.6337% ( 1) 00:15:02.547 4.347 - 4.373: 99.6386% ( 1) 00:15:02.547 4.373 - 4.400: 99.6434% ( 1) 00:15:02.547 4.453 - 4.480: 99.6482% ( 1) 00:15:02.547 4.480 - 4.507: 99.6530% ( 1) 00:15:02.547 4.533 - 4.560: 99.6578% ( 1) 00:15:02.547 4.560 - 4.587: 99.6627% ( 1) 00:15:02.547 4.613 - 4.640: 99.6771% ( 3) 00:15:02.547 4.640 - 4.667: 99.6819% ( 1) 00:15:02.547 4.667 - 4.693: 99.6916% ( 2) 00:15:02.547 4.693 - 4.720: 99.7012% ( 2) 00:15:02.547 4.720 - 4.747: 99.7060% ( 1) 00:15:02.547 4.800 - 4.827: 99.7157% ( 2) 00:15:02.547 4.827 - 4.853: 99.7253% ( 2) 00:15:02.547 4.933 - 4.960: 99.7446% ( 4) 00:15:02.548 4.960 - 4.987: 99.7494% ( 1) 00:15:02.548 5.013 - 5.040: 99.7590% ( 2) 00:15:02.548 5.040 - 5.067: 99.7639% ( 1) 00:15:02.548 5.093 - 5.120: 99.7687% ( 1) 00:15:02.548 5.120 - 5.147: 99.7735% ( 1) 00:15:02.548 5.147 - 5.173: 99.7783% ( 1) 00:15:02.548 5.173 - 5.200: 99.7831% ( 1) 00:15:02.548 5.360 - 5.387: 99.7928% ( 2) 00:15:02.548 5.387 - 5.413: 99.8120% ( 4) 00:15:02.548 5.413 - 5.440: 99.8217% ( 2) 00:15:02.548 5.440 - 5.467: 99.8265% ( 1) 00:15:02.548 5.493 - 5.520: 99.8361% ( 2) 00:15:02.548 5.573 - 5.600: 99.8410% ( 1) 00:15:02.548 5.600 - 5.627: 99.8458% ( 1) 00:15:02.548 5.707 - 5.733: 99.8506% ( 1) 00:15:02.548 5.733 - 5.760: 99.8554% ( 1) 00:15:02.548 5.813 - 5.840: 99.8602% ( 1) 00:15:02.548 5.840 - 5.867: 99.8651% ( 1) 00:15:02.548 5.893 - 5.920: 99.8747% ( 2) 00:15:02.548 5.947 - 5.973: 99.8843% ( 2) 00:15:02.548 6.000 - 6.027: 99.8892% ( 1) 00:15:02.548 6.027 - 6.053: 99.8940% ( 1) 00:15:02.548 6.133 - 6.160: 99.8988% ( 1) 00:15:02.548 6.187 - 6.213: 99.9036% ( 1) 00:15:02.548 6.267 - 6.293: 99.9084% ( 1) 00:15:02.548 6.347 - 6.373: 99.9133% ( 1) 00:15:02.548 6.587 - 6.613: 99.9181% ( 1) 00:15:02.548 6.693 - 6.720: 99.9229% ( 1) 00:15:02.548 6.987 - 7.040: 99.9277% ( 1) 
00:15:02.548 9.280 - 9.333: 99.9325% ( 1) 00:15:02.548 10.507 - 10.560: 99.9373% ( 1) 00:15:02.548 [2024-11-06 13:11:44.310593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.548 49.493 - 49.707: 99.9422% ( 1) 00:15:02.548 3986.773 - 4014.080: 100.0000% ( 12) 00:15:02.548 00:15:02.548 Complete histogram 00:15:02.548 ================== 00:15:02.548 Range in us Cumulative Count 00:15:02.548 1.633 - 1.640: 0.0048% ( 1) 00:15:02.548 1.640 - 1.647: 0.6410% ( 132) 00:15:02.548 1.647 - 1.653: 0.9542% ( 65) 00:15:02.548 1.653 - 1.660: 1.0120% ( 12) 00:15:02.548 1.660 - 1.667: 1.1325% ( 25) 00:15:02.548 1.667 - 1.673: 1.2000% ( 14) 00:15:02.548 1.673 - 1.680: 1.2145% ( 3) 00:15:02.548 1.680 - 1.687: 1.2289% ( 3) 00:15:02.548 1.687 - 1.693: 2.9157% ( 350) 00:15:02.548 1.693 - 1.700: 40.8337% ( 7868) 00:15:02.548 1.700 - 1.707: 50.6651% ( 2040) 00:15:02.548 1.707 - 1.720: 69.8699% ( 3985) 00:15:02.548 1.720 - 1.733: 79.8024% ( 2061) 00:15:02.548 1.733 - 1.747: 82.7181% ( 605) 00:15:02.548 1.747 - 1.760: 85.8313% ( 646) 00:15:02.548 1.760 - 1.773: 91.5518% ( 1187) 00:15:02.548 1.773 - 1.787: 95.9084% ( 904) 00:15:02.548 1.787 - 1.800: 98.2361% ( 483) 00:15:02.548 1.800 - 1.813: 99.1614% ( 192) 00:15:02.548 1.813 - 1.827: 99.3831% ( 46) 00:15:02.548 1.827 - 1.840: 99.4265% ( 9) 00:15:02.548 1.840 - 1.853: 99.4313% ( 1) 00:15:02.548 3.387 - 3.400: 99.4361% ( 1) 00:15:02.548 3.653 - 3.680: 99.4410% ( 1) 00:15:02.548 3.680 - 3.707: 99.4458% ( 1) 00:15:02.548 3.813 - 3.840: 99.4506% ( 1) 00:15:02.548 3.893 - 3.920: 99.4554% ( 1) 00:15:02.548 3.920 - 3.947: 99.4602% ( 1) 00:15:02.548 3.973 - 4.000: 99.4651% ( 1) 00:15:02.548 4.027 - 4.053: 99.4699% ( 1) 00:15:02.548 4.133 - 4.160: 99.4795% ( 2) 00:15:02.548 4.187 - 4.213: 99.4843% ( 1) 00:15:02.548 4.213 - 4.240: 99.4940% ( 2) 00:15:02.548 4.267 - 4.293: 99.4988% ( 1) 00:15:02.548 4.373 - 4.400: 99.5036% ( 1) 00:15:02.548 4.480 - 4.507: 99.5084% ( 1) 00:15:02.548 4.587 - 4.613: 99.5133% ( 1) 00:15:02.548 4.613 - 4.640: 99.5181% ( 1) 00:15:02.548 4.640 - 4.667: 99.5277% ( 2) 00:15:02.548 4.720 - 4.747: 99.5325% ( 1) 00:15:02.548 4.773 - 4.800: 99.5373% ( 1) 00:15:02.548 4.800 - 4.827: 99.5422% ( 1) 00:15:02.548 4.827 - 4.853: 99.5470% ( 1) 00:15:02.548 4.853 - 4.880: 99.5614% ( 3) 00:15:02.548 4.907 - 4.933: 99.5663% ( 1) 00:15:02.548 5.067 - 5.093: 99.5711% ( 1) 00:15:02.548 5.413 - 5.440: 99.5759% ( 1) 00:15:02.548 5.600 - 5.627: 99.5807% ( 1) 00:15:02.548 5.867 - 5.893: 99.5855% ( 1) 00:15:02.548 5.947 - 5.973: 99.5904% ( 1) 00:15:02.548 7.893 - 7.947: 99.5952% ( 1) 00:15:02.548 10.987 - 11.040: 99.6000% ( 1) 00:15:02.548 12.320 - 12.373: 99.6048% ( 1) 00:15:02.548 2990.080 - 3003.733: 99.6096% ( 1) 00:15:02.548 3986.773 - 4014.080: 100.0000% ( 81) 00:15:02.548 00:15:02.548 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:02.548 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:02.548 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:02.548 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:02.548 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:02.808 [ 00:15:02.808 { 00:15:02.808 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:02.808 "subtype": "Discovery", 00:15:02.808 "listen_addresses": [], 00:15:02.808 "allow_any_host": true, 00:15:02.808 "hosts": [] 00:15:02.808 }, 00:15:02.808 { 00:15:02.808 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:02.808 "subtype": "NVMe", 00:15:02.808 "listen_addresses": [ 00:15:02.808 { 00:15:02.808 "trtype": "VFIOUSER", 00:15:02.808 "adrfam": "IPv4", 00:15:02.808 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:02.808 "trsvcid": "0" 00:15:02.808 } 00:15:02.808 ], 00:15:02.808 "allow_any_host": true, 00:15:02.808 "hosts": [], 00:15:02.808 "serial_number": "SPDK1", 00:15:02.808 "model_number": "SPDK bdev Controller", 00:15:02.808 "max_namespaces": 32, 00:15:02.808 "min_cntlid": 1, 00:15:02.808 "max_cntlid": 65519, 00:15:02.808 "namespaces": [ 00:15:02.808 { 00:15:02.808 "nsid": 1, 00:15:02.808 "bdev_name": "Malloc1", 00:15:02.808 "name": "Malloc1", 00:15:02.808 "nguid": "EDF6474C3544466E908478E67C70D1C1", 00:15:02.808 "uuid": "edf6474c-3544-466e-9084-78e67c70d1c1" 00:15:02.808 } 00:15:02.808 ] 00:15:02.808 }, 00:15:02.808 { 00:15:02.808 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:02.808 "subtype": "NVMe", 00:15:02.808 "listen_addresses": [ 00:15:02.809 { 00:15:02.809 "trtype": "VFIOUSER", 00:15:02.809 "adrfam": "IPv4", 00:15:02.809 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:02.809 "trsvcid": "0" 00:15:02.809 } 00:15:02.809 ], 00:15:02.809 "allow_any_host": true, 00:15:02.809 "hosts": [], 00:15:02.809 "serial_number": "SPDK2", 00:15:02.809 "model_number": "SPDK bdev Controller", 00:15:02.809 "max_namespaces": 32, 00:15:02.809 "min_cntlid": 1, 00:15:02.809 "max_cntlid": 65519, 00:15:02.809 "namespaces": [ 00:15:02.809 { 00:15:02.809 "nsid": 1, 00:15:02.809 "bdev_name": "Malloc2", 00:15:02.809 "name": "Malloc2", 00:15:02.809 "nguid": "5B9E84BEF6DF4983BD149187954BBAA0", 00:15:02.809 "uuid": "5b9e84be-f6df-4983-bd14-9187954bbaa0" 00:15:02.809 } 00:15:02.809 ] 00:15:02.809 } 00:15:02.809 ] 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1680021 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:02.809 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:02.809 [2024-11-06 13:11:44.686157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.069 Malloc3 00:15:03.069 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:03.069 [2024-11-06 13:11:44.890551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.069 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:03.069 Asynchronous Event Request test 00:15:03.069 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.069 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:03.069 Registering asynchronous event callbacks... 00:15:03.069 Starting namespace attribute notice tests for all controllers... 00:15:03.069 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:03.069 aer_cb - Changed Namespace 00:15:03.069 Cleaning up... 00:15:03.329 [ 00:15:03.329 { 00:15:03.329 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:03.329 "subtype": "Discovery", 00:15:03.329 "listen_addresses": [], 00:15:03.329 "allow_any_host": true, 00:15:03.329 "hosts": [] 00:15:03.329 }, 00:15:03.329 { 00:15:03.329 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:03.329 "subtype": "NVMe", 00:15:03.329 "listen_addresses": [ 00:15:03.329 { 00:15:03.329 "trtype": "VFIOUSER", 00:15:03.329 "adrfam": "IPv4", 00:15:03.329 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:03.329 "trsvcid": "0" 00:15:03.329 } 00:15:03.329 ], 00:15:03.329 "allow_any_host": true, 00:15:03.329 "hosts": [], 00:15:03.329 "serial_number": "SPDK1", 00:15:03.329 "model_number": "SPDK bdev Controller", 00:15:03.329 "max_namespaces": 32, 00:15:03.329 "min_cntlid": 1, 00:15:03.329 "max_cntlid": 65519, 00:15:03.329 "namespaces": [ 00:15:03.329 { 00:15:03.329 "nsid": 1, 00:15:03.329 "bdev_name": "Malloc1", 00:15:03.329 "name": "Malloc1", 00:15:03.329 "nguid": "EDF6474C3544466E908478E67C70D1C1", 00:15:03.329 "uuid": "edf6474c-3544-466e-9084-78e67c70d1c1" 00:15:03.329 }, 00:15:03.329 { 00:15:03.329 "nsid": 2, 00:15:03.329 "bdev_name": "Malloc3", 00:15:03.329 "name": "Malloc3", 00:15:03.329 "nguid": "CDEC18133C104CF0A9F5EEB830288906", 00:15:03.329 "uuid": "cdec1813-3c10-4cf0-a9f5-eeb830288906" 00:15:03.329 } 00:15:03.329 ] 00:15:03.329 }, 00:15:03.329 { 00:15:03.329 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:03.329 "subtype": "NVMe", 00:15:03.329 "listen_addresses": [ 00:15:03.330 { 00:15:03.330 "trtype": "VFIOUSER", 00:15:03.330 "adrfam": "IPv4", 00:15:03.330 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:03.330 "trsvcid": "0" 00:15:03.330 } 00:15:03.330 ], 00:15:03.330 "allow_any_host": true, 00:15:03.330 "hosts": [], 00:15:03.330 "serial_number": "SPDK2", 00:15:03.330 "model_number": "SPDK bdev 
Controller", 00:15:03.330 "max_namespaces": 32, 00:15:03.330 "min_cntlid": 1, 00:15:03.330 "max_cntlid": 65519, 00:15:03.330 "namespaces": [ 00:15:03.330 { 00:15:03.330 "nsid": 1, 00:15:03.330 "bdev_name": "Malloc2", 00:15:03.330 "name": "Malloc2", 00:15:03.330 "nguid": "5B9E84BEF6DF4983BD149187954BBAA0", 00:15:03.330 "uuid": "5b9e84be-f6df-4983-bd14-9187954bbaa0" 00:15:03.330 } 00:15:03.330 ] 00:15:03.330 } 00:15:03.330 ] 00:15:03.330 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1680021 00:15:03.330 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:03.330 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:03.330 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:03.330 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:03.330 [2024-11-06 13:11:45.116039] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:15:03.330 [2024-11-06 13:11:45.116083] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1680085 ] 00:15:03.330 [2024-11-06 13:11:45.154993] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:03.330 [2024-11-06 13:11:45.160205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:03.330 [2024-11-06 13:11:45.160224] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdbc9125000 00:15:03.330 [2024-11-06 13:11:45.161206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.162214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.163225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.164230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.165235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.166243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.167246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:03.330 [2024-11-06 13:11:45.168253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
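The AER exercise that completed just above (steps @30 through @44, before this cnode2 identify pass began) reduces to the following flow; a condensed sketch assuming it runs from the spdk checkout with the target already serving cnode1:

    # arm the AER listener; it creates the touch file once the AER is armed
    ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done
    rm -f /tmp/aer_touch_file
    # hot-add namespace 2; the target raises the namespace-attribute-changed notice
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    wait $aerpid    # exits after logging 'aer_cb - Changed Namespace'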
00:15:03.330 [2024-11-06 13:11:45.169257] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:03.330 [2024-11-06 13:11:45.169265] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdbc911a000 00:15:03.330 [2024-11-06 13:11:45.170177] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:03.330 [2024-11-06 13:11:45.179554] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:03.330 [2024-11-06 13:11:45.179575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:03.330 [2024-11-06 13:11:45.184637] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:03.330 [2024-11-06 13:11:45.184674] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:03.330 [2024-11-06 13:11:45.184733] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:03.330 [2024-11-06 13:11:45.184749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:03.330 [2024-11-06 13:11:45.184753] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:03.330 [2024-11-06 13:11:45.185650] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:03.330 [2024-11-06 13:11:45.185661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:03.330 [2024-11-06 13:11:45.185666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:03.330 [2024-11-06 13:11:45.186659] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:03.330 [2024-11-06 13:11:45.186665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:03.330 [2024-11-06 13:11:45.186671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.187668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:03.330 [2024-11-06 13:11:45.187676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.188671] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:03.330 [2024-11-06 13:11:45.188677] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:03.330 [2024-11-06 13:11:45.188681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.188686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.188792] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:03.330 [2024-11-06 13:11:45.188796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.188799] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:03.330 [2024-11-06 13:11:45.189681] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:03.330 [2024-11-06 13:11:45.190689] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:03.330 [2024-11-06 13:11:45.191696] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:03.330 [2024-11-06 13:11:45.192698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:03.330 [2024-11-06 13:11:45.192729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:03.330 [2024-11-06 13:11:45.193711] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:03.330 [2024-11-06 13:11:45.193719] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:03.330 [2024-11-06 13:11:45.193722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:03.330 [2024-11-06 13:11:45.193738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:03.330 [2024-11-06 13:11:45.193743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:03.330 [2024-11-06 13:11:45.193757] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:03.330 [2024-11-06 13:11:45.193761] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:03.330 [2024-11-06 13:11:45.193764] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.330 [2024-11-06 13:11:45.193775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:03.330 [2024-11-06 13:11:45.201751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:03.330 
[2024-11-06 13:11:45.201761] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:03.330 [2024-11-06 13:11:45.201765] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:03.330 [2024-11-06 13:11:45.201768] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:03.330 [2024-11-06 13:11:45.201772] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:03.330 [2024-11-06 13:11:45.201777] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:03.330 [2024-11-06 13:11:45.201780] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:03.330 [2024-11-06 13:11:45.201783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:03.330 [2024-11-06 13:11:45.201791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:03.330 [2024-11-06 13:11:45.201799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:03.330 [2024-11-06 13:11:45.209750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:03.330 [2024-11-06 13:11:45.209759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.330 [2024-11-06 13:11:45.209766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.330 [2024-11-06 13:11:45.209772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.331 [2024-11-06 13:11:45.209778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.331 [2024-11-06 13:11:45.209782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.209787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.209793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:03.331 [2024-11-06 13:11:45.217751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:03.331 [2024-11-06 13:11:45.217759] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:03.331 [2024-11-06 13:11:45.217763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:03.331 [2024-11-06 13:11:45.217769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.217774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.217781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:03.331 [2024-11-06 13:11:45.225760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:03.331 [2024-11-06 13:11:45.225807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.225813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:03.331 [2024-11-06 13:11:45.225818] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:03.331 [2024-11-06 13:11:45.225822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:03.331 [2024-11-06 13:11:45.225824] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.331 [2024-11-06 13:11:45.225829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.233750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:03.592 [2024-11-06 13:11:45.233760] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:03.592 [2024-11-06 13:11:45.233770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.233776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.233781] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:03.592 [2024-11-06 13:11:45.233784] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:03.592 [2024-11-06 13:11:45.233786] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.592 [2024-11-06 13:11:45.233790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.241750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:03.592 [2024-11-06 13:11:45.241763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.241769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.241775] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:03.592 [2024-11-06 13:11:45.241779] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:03.592 [2024-11-06 13:11:45.241781] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.592 [2024-11-06 13:11:45.241785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.249751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:03.592 [2024-11-06 13:11:45.249760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249787] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:03.592 [2024-11-06 13:11:45.249790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:03.592 [2024-11-06 13:11:45.249794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:03.592 [2024-11-06 13:11:45.249808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.257751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:03.592 [2024-11-06 13:11:45.257762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.265750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:03.592 [2024-11-06 13:11:45.265760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:03.592 [2024-11-06 13:11:45.273752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:03.592 [2024-11-06 13:11:45.273762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:03.593 [2024-11-06 13:11:45.281751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:03.593 [2024-11-06 13:11:45.281763] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:03.593 [2024-11-06 13:11:45.281766] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:03.593 [2024-11-06 13:11:45.281769] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:03.593 [2024-11-06 13:11:45.281772] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:03.593 [2024-11-06 13:11:45.281774] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:03.593 [2024-11-06 13:11:45.281779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:03.593 [2024-11-06 13:11:45.281784] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:03.593 [2024-11-06 13:11:45.281787] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:03.593 [2024-11-06 13:11:45.281790] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.593 [2024-11-06 13:11:45.281794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:03.593 [2024-11-06 13:11:45.281801] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:03.593 [2024-11-06 13:11:45.281805] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:03.593 [2024-11-06 13:11:45.281807] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.593 [2024-11-06 13:11:45.281811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:03.593 [2024-11-06 13:11:45.281817] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:03.593 [2024-11-06 13:11:45.281820] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:03.593 [2024-11-06 13:11:45.281822] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:03.593 [2024-11-06 13:11:45.281826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:03.593 [2024-11-06 13:11:45.289752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:03.593 [2024-11-06 13:11:45.289762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:03.593 [2024-11-06 13:11:45.289770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:03.593 
[2024-11-06 13:11:45.289775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:03.593 ===================================================== 00:15:03.593 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:03.593 ===================================================== 00:15:03.593 Controller Capabilities/Features 00:15:03.593 ================================ 00:15:03.593 Vendor ID: 4e58 00:15:03.593 Subsystem Vendor ID: 4e58 00:15:03.593 Serial Number: SPDK2 00:15:03.593 Model Number: SPDK bdev Controller 00:15:03.593 Firmware Version: 25.01 00:15:03.593 Recommended Arb Burst: 6 00:15:03.593 IEEE OUI Identifier: 8d 6b 50 00:15:03.593 Multi-path I/O 00:15:03.593 May have multiple subsystem ports: Yes 00:15:03.593 May have multiple controllers: Yes 00:15:03.593 Associated with SR-IOV VF: No 00:15:03.593 Max Data Transfer Size: 131072 00:15:03.593 Max Number of Namespaces: 32 00:15:03.593 Max Number of I/O Queues: 127 00:15:03.593 NVMe Specification Version (VS): 1.3 00:15:03.593 NVMe Specification Version (Identify): 1.3 00:15:03.593 Maximum Queue Entries: 256 00:15:03.593 Contiguous Queues Required: Yes 00:15:03.593 Arbitration Mechanisms Supported 00:15:03.593 Weighted Round Robin: Not Supported 00:15:03.593 Vendor Specific: Not Supported 00:15:03.593 Reset Timeout: 15000 ms 00:15:03.593 Doorbell Stride: 4 bytes 00:15:03.593 NVM Subsystem Reset: Not Supported 00:15:03.593 Command Sets Supported 00:15:03.593 NVM Command Set: Supported 00:15:03.593 Boot Partition: Not Supported 00:15:03.593 Memory Page Size Minimum: 4096 bytes 00:15:03.593 Memory Page Size Maximum: 4096 bytes 00:15:03.593 Persistent Memory Region: Not Supported 00:15:03.593 Optional Asynchronous Events Supported 00:15:03.593 Namespace Attribute Notices: Supported 00:15:03.593 Firmware Activation Notices: Not Supported 00:15:03.593 ANA Change Notices: Not Supported 00:15:03.593 PLE Aggregate Log Change Notices: Not Supported 00:15:03.593 LBA Status Info Alert Notices: Not Supported 00:15:03.593 EGE Aggregate Log Change Notices: Not Supported 00:15:03.593 Normal NVM Subsystem Shutdown event: Not Supported 00:15:03.593 Zone Descriptor Change Notices: Not Supported 00:15:03.593 Discovery Log Change Notices: Not Supported 00:15:03.593 Controller Attributes 00:15:03.593 128-bit Host Identifier: Supported 00:15:03.593 Non-Operational Permissive Mode: Not Supported 00:15:03.593 NVM Sets: Not Supported 00:15:03.593 Read Recovery Levels: Not Supported 00:15:03.593 Endurance Groups: Not Supported 00:15:03.593 Predictable Latency Mode: Not Supported 00:15:03.593 Traffic Based Keep ALive: Not Supported 00:15:03.593 Namespace Granularity: Not Supported 00:15:03.593 SQ Associations: Not Supported 00:15:03.593 UUID List: Not Supported 00:15:03.593 Multi-Domain Subsystem: Not Supported 00:15:03.593 Fixed Capacity Management: Not Supported 00:15:03.593 Variable Capacity Management: Not Supported 00:15:03.593 Delete Endurance Group: Not Supported 00:15:03.593 Delete NVM Set: Not Supported 00:15:03.593 Extended LBA Formats Supported: Not Supported 00:15:03.593 Flexible Data Placement Supported: Not Supported 00:15:03.593 00:15:03.593 Controller Memory Buffer Support 00:15:03.593 ================================ 00:15:03.593 Supported: No 00:15:03.593 00:15:03.593 Persistent Memory Region Support 00:15:03.593 ================================ 00:15:03.593 Supported: No 00:15:03.593 00:15:03.593 Admin Command Set Attributes 
00:15:03.593 ============================ 00:15:03.593 Security Send/Receive: Not Supported 00:15:03.593 Format NVM: Not Supported 00:15:03.593 Firmware Activate/Download: Not Supported 00:15:03.593 Namespace Management: Not Supported 00:15:03.593 Device Self-Test: Not Supported 00:15:03.593 Directives: Not Supported 00:15:03.593 NVMe-MI: Not Supported 00:15:03.593 Virtualization Management: Not Supported 00:15:03.593 Doorbell Buffer Config: Not Supported 00:15:03.593 Get LBA Status Capability: Not Supported 00:15:03.593 Command & Feature Lockdown Capability: Not Supported 00:15:03.593 Abort Command Limit: 4 00:15:03.593 Async Event Request Limit: 4 00:15:03.593 Number of Firmware Slots: N/A 00:15:03.593 Firmware Slot 1 Read-Only: N/A 00:15:03.593 Firmware Activation Without Reset: N/A 00:15:03.593 Multiple Update Detection Support: N/A 00:15:03.593 Firmware Update Granularity: No Information Provided 00:15:03.593 Per-Namespace SMART Log: No 00:15:03.593 Asymmetric Namespace Access Log Page: Not Supported 00:15:03.593 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:03.593 Command Effects Log Page: Supported 00:15:03.593 Get Log Page Extended Data: Supported 00:15:03.593 Telemetry Log Pages: Not Supported 00:15:03.593 Persistent Event Log Pages: Not Supported 00:15:03.593 Supported Log Pages Log Page: May Support 00:15:03.593 Commands Supported & Effects Log Page: Not Supported 00:15:03.593 Feature Identifiers & Effects Log Page:May Support 00:15:03.593 NVMe-MI Commands & Effects Log Page: May Support 00:15:03.593 Data Area 4 for Telemetry Log: Not Supported 00:15:03.593 Error Log Page Entries Supported: 128 00:15:03.593 Keep Alive: Supported 00:15:03.593 Keep Alive Granularity: 10000 ms 00:15:03.593 00:15:03.593 NVM Command Set Attributes 00:15:03.593 ========================== 00:15:03.593 Submission Queue Entry Size 00:15:03.593 Max: 64 00:15:03.593 Min: 64 00:15:03.593 Completion Queue Entry Size 00:15:03.593 Max: 16 00:15:03.593 Min: 16 00:15:03.593 Number of Namespaces: 32 00:15:03.593 Compare Command: Supported 00:15:03.593 Write Uncorrectable Command: Not Supported 00:15:03.593 Dataset Management Command: Supported 00:15:03.593 Write Zeroes Command: Supported 00:15:03.593 Set Features Save Field: Not Supported 00:15:03.593 Reservations: Not Supported 00:15:03.593 Timestamp: Not Supported 00:15:03.593 Copy: Supported 00:15:03.593 Volatile Write Cache: Present 00:15:03.593 Atomic Write Unit (Normal): 1 00:15:03.593 Atomic Write Unit (PFail): 1 00:15:03.593 Atomic Compare & Write Unit: 1 00:15:03.593 Fused Compare & Write: Supported 00:15:03.593 Scatter-Gather List 00:15:03.593 SGL Command Set: Supported (Dword aligned) 00:15:03.593 SGL Keyed: Not Supported 00:15:03.593 SGL Bit Bucket Descriptor: Not Supported 00:15:03.593 SGL Metadata Pointer: Not Supported 00:15:03.593 Oversized SGL: Not Supported 00:15:03.593 SGL Metadata Address: Not Supported 00:15:03.593 SGL Offset: Not Supported 00:15:03.593 Transport SGL Data Block: Not Supported 00:15:03.593 Replay Protected Memory Block: Not Supported 00:15:03.593 00:15:03.593 Firmware Slot Information 00:15:03.593 ========================= 00:15:03.593 Active slot: 1 00:15:03.593 Slot 1 Firmware Revision: 25.01 00:15:03.593 00:15:03.594 00:15:03.594 Commands Supported and Effects 00:15:03.594 ============================== 00:15:03.594 Admin Commands 00:15:03.594 -------------- 00:15:03.594 Get Log Page (02h): Supported 00:15:03.594 Identify (06h): Supported 00:15:03.594 Abort (08h): Supported 00:15:03.594 Set Features (09h): Supported 
00:15:03.594 Get Features (0Ah): Supported 00:15:03.594 Asynchronous Event Request (0Ch): Supported 00:15:03.594 Keep Alive (18h): Supported 00:15:03.594 I/O Commands 00:15:03.594 ------------ 00:15:03.594 Flush (00h): Supported LBA-Change 00:15:03.594 Write (01h): Supported LBA-Change 00:15:03.594 Read (02h): Supported 00:15:03.594 Compare (05h): Supported 00:15:03.594 Write Zeroes (08h): Supported LBA-Change 00:15:03.594 Dataset Management (09h): Supported LBA-Change 00:15:03.594 Copy (19h): Supported LBA-Change 00:15:03.594 00:15:03.594 Error Log 00:15:03.594 ========= 00:15:03.594 00:15:03.594 Arbitration 00:15:03.594 =========== 00:15:03.594 Arbitration Burst: 1 00:15:03.594 00:15:03.594 Power Management 00:15:03.594 ================ 00:15:03.594 Number of Power States: 1 00:15:03.594 Current Power State: Power State #0 00:15:03.594 Power State #0: 00:15:03.594 Max Power: 0.00 W 00:15:03.594 Non-Operational State: Operational 00:15:03.594 Entry Latency: Not Reported 00:15:03.594 Exit Latency: Not Reported 00:15:03.594 Relative Read Throughput: 0 00:15:03.594 Relative Read Latency: 0 00:15:03.594 Relative Write Throughput: 0 00:15:03.594 Relative Write Latency: 0 00:15:03.594 Idle Power: Not Reported 00:15:03.594 Active Power: Not Reported 00:15:03.594 Non-Operational Permissive Mode: Not Supported 00:15:03.594 00:15:03.594 Health Information 00:15:03.594 ================== 00:15:03.594 Critical Warnings: 00:15:03.594 Available Spare Space: OK 00:15:03.594 Temperature: OK 00:15:03.594 Device Reliability: OK 00:15:03.594 Read Only: No 00:15:03.594 Volatile Memory Backup: OK 00:15:03.594 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:03.594 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:03.594 Available Spare: 0% 00:15:03.594 Available Spare Threshold: 0% [2024-11-06 13:11:45.289851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:03.594 [2024-11-06 13:11:45.297751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:03.594 [2024-11-06 13:11:45.297774] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:03.594 [2024-11-06 13:11:45.297782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.594 [2024-11-06 13:11:45.297786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.594 [2024-11-06 13:11:45.297791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.594 [2024-11-06 13:11:45.297795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.594 [2024-11-06 13:11:45.297839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:03.594 [2024-11-06 13:11:45.297847] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:03.594 [2024-11-06 13:11:45.298844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.594 [2024-11-06 13:11:45.298880] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:03.594 [2024-11-06 13:11:45.298885] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:03.594 [2024-11-06 13:11:45.299848] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:03.594 [2024-11-06 13:11:45.299857] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:03.594 [2024-11-06 13:11:45.299901] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:03.594 [2024-11-06 13:11:45.300871] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:03.594 Life Percentage Used: 0% 00:15:03.594 Data Units Read: 0 00:15:03.594 Data Units Written: 0 00:15:03.594 Host Read Commands: 0 00:15:03.594 Host Write Commands: 0 00:15:03.594 Controller Busy Time: 0 minutes 00:15:03.594 Power Cycles: 0 00:15:03.594 Power On Hours: 0 hours 00:15:03.594 Unsafe Shutdowns: 0 00:15:03.594 Unrecoverable Media Errors: 0 00:15:03.594 Lifetime Error Log Entries: 0 00:15:03.594 Warning Temperature Time: 0 minutes 00:15:03.594 Critical Temperature Time: 0 minutes 00:15:03.594 00:15:03.594 Number of Queues 00:15:03.594 ================ 00:15:03.594 Number of I/O Submission Queues: 127 00:15:03.594 Number of I/O Completion Queues: 127 00:15:03.594 00:15:03.594 Active Namespaces 00:15:03.594 ================= 00:15:03.594 Namespace ID:1 00:15:03.594 Error Recovery Timeout: Unlimited 00:15:03.594 Command Set Identifier: NVM (00h) 00:15:03.594 Deallocate: Supported 00:15:03.594 Deallocated/Unwritten Error: Not Supported 00:15:03.594 Deallocated Read Value: Unknown 00:15:03.594 Deallocate in Write Zeroes: Not Supported 00:15:03.594 Deallocated Guard Field: 0xFFFF 00:15:03.594 Flush: Supported 00:15:03.594 Reservation: Supported 00:15:03.594 Namespace Sharing Capabilities: Multiple Controllers 00:15:03.594 Size (in LBAs): 131072 (0GiB) 00:15:03.594 Capacity (in LBAs): 131072 (0GiB) 00:15:03.594 Utilization (in LBAs): 131072 (0GiB) 00:15:03.594 NGUID: 5B9E84BEF6DF4983BD149187954BBAA0 00:15:03.594 UUID: 5b9e84be-f6df-4983-bd14-9187954bbaa0 00:15:03.594 Thin Provisioning: Not Supported 00:15:03.594 Per-NS Atomic Units: Yes 00:15:03.594 Atomic Boundary Size (Normal): 0 00:15:03.594 Atomic Boundary Size (PFail): 0 00:15:03.594 Atomic Boundary Offset: 0 00:15:03.594 Maximum Single Source Range Length: 65535 00:15:03.594 Maximum Copy Length: 65535 00:15:03.594 Maximum Source Range Count: 1 00:15:03.594 NGUID/EUI64 Never Reused: No 00:15:03.594 Namespace Write Protected: No 00:15:03.594 Number of LBA Formats: 1 00:15:03.594 Current LBA Format: LBA Format #00 00:15:03.594 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:03.594 00:15:03.594 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:03.594 [2024-11-06 13:11:45.491134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.876 Initializing NVMe Controllers 00:15:08.876
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.876 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:08.876 Initialization complete. Launching workers. 00:15:08.876 ======================================================== 00:15:08.876 Latency(us) 00:15:08.876 Device Information : IOPS MiB/s Average min max 00:15:08.876 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40038.19 156.40 3197.46 842.65 10781.94 00:15:08.876 ======================================================== 00:15:08.876 Total : 40038.19 156.40 3197.46 842.65 10781.94 00:15:08.876 00:15:08.876 [2024-11-06 13:11:50.603968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.876 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:09.136 [2024-11-06 13:11:50.795530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.415 Initializing NVMe Controllers 00:15:14.415 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.415 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:14.415 Initialization complete. Launching workers. 00:15:14.415 ======================================================== 00:15:14.415 Latency(us) 00:15:14.415 Device Information : IOPS MiB/s Average min max 00:15:14.415 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39965.01 156.11 3202.67 852.91 7774.24 00:15:14.415 ======================================================== 00:15:14.415 Total : 39965.01 156.11 3202.67 852.91 7774.24 00:15:14.415 00:15:14.415 [2024-11-06 13:11:55.813834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.415 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:14.415 [2024-11-06 13:11:56.019138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.692 [2024-11-06 13:12:01.161843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.692 Initializing NVMe Controllers 00:15:19.692 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.692 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:19.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:19.692 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:19.692 Initialization complete. Launching workers. 
00:15:19.692 Starting thread on core 2 00:15:19.692 Starting thread on core 3 00:15:19.692 Starting thread on core 1 00:15:19.692 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:19.692 [2024-11-06 13:12:01.414212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.986 [2024-11-06 13:12:04.472181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.986 Initializing NVMe Controllers 00:15:22.986 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.986 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.986 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:22.986 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:22.986 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:22.986 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:22.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:22.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:22.986 Initialization complete. Launching workers. 00:15:22.986 Starting thread on core 1 with urgent priority queue 00:15:22.986 Starting thread on core 2 with urgent priority queue 00:15:22.986 Starting thread on core 3 with urgent priority queue 00:15:22.986 Starting thread on core 0 with urgent priority queue 00:15:22.986 SPDK bdev Controller (SPDK2 ) core 0: 10322.67 IO/s 9.69 secs/100000 ios 00:15:22.986 SPDK bdev Controller (SPDK2 ) core 1: 12211.33 IO/s 8.19 secs/100000 ios 00:15:22.986 SPDK bdev Controller (SPDK2 ) core 2: 8360.33 IO/s 11.96 secs/100000 ios 00:15:22.986 SPDK bdev Controller (SPDK2 ) core 3: 13479.33 IO/s 7.42 secs/100000 ios 00:15:22.986 ======================================================== 00:15:22.986 00:15:22.986 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:22.986 [2024-11-06 13:12:04.711141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.986 Initializing NVMe Controllers 00:15:22.986 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.986 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.986 Namespace ID: 1 size: 0GB 00:15:22.986 Initialization complete. 00:15:22.986 INFO: using host memory buffer for IO 00:15:22.986 Hello world! 
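The overhead test that runs next measures per-command time on the submit and completion paths and prints the two cumulative histograms shown below, with ranges in microseconds (the -H flag in its invocation appears to be what requests the histogram output). Condensed from the command as run here:

# Per-IO overhead measurement against the same vfio-user controller (flags verbatim from this run).
./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'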
00:15:22.986 [2024-11-06 13:12:04.721193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.986 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:23.246 [2024-11-06 13:12:04.950349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.185 Initializing NVMe Controllers 00:15:24.185 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:24.185 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:24.185 Initialization complete. Launching workers. 00:15:24.185 submit (in ns) avg, min, max = 5143.4, 2838.3, 3998448.3 00:15:24.185 complete (in ns) avg, min, max = 15871.8, 1691.7, 3997458.3 00:15:24.185 00:15:24.185 Submit histogram 00:15:24.185 ================ 00:15:24.185 Range in us Cumulative Count 00:15:24.185 2.827 - 2.840: 0.0343% ( 7) 00:15:24.185 2.840 - 2.853: 0.7995% ( 156) 00:15:24.185 2.853 - 2.867: 2.8692% ( 422) 00:15:24.185 2.867 - 2.880: 6.3711% ( 714) 00:15:24.185 2.880 - 2.893: 11.4375% ( 1033) 00:15:24.185 2.893 - 2.907: 16.9994% ( 1134) 00:15:24.185 2.907 - 2.920: 21.9383% ( 1007) 00:15:24.185 2.920 - 2.933: 26.9606% ( 1024) 00:15:24.185 2.933 - 2.947: 32.0369% ( 1035) 00:15:24.185 2.947 - 2.960: 37.3044% ( 1074) 00:15:24.185 2.960 - 2.973: 43.0526% ( 1172) 00:15:24.185 2.973 - 2.987: 48.4183% ( 1094) 00:15:24.185 2.987 - 3.000: 55.3043% ( 1404) 00:15:24.185 3.000 - 3.013: 63.7991% ( 1732) 00:15:24.185 3.013 - 3.027: 72.2792% ( 1729) 00:15:24.185 3.027 - 3.040: 79.4987% ( 1472) 00:15:24.185 3.040 - 3.053: 86.4829% ( 1424) 00:15:24.185 3.053 - 3.067: 91.6769% ( 1059) 00:15:24.185 3.067 - 3.080: 95.1248% ( 703) 00:15:24.185 3.080 - 3.093: 97.2878% ( 441) 00:15:24.185 3.093 - 3.107: 98.5580% ( 259) 00:15:24.185 3.107 - 3.120: 99.0976% ( 110) 00:15:24.185 3.120 - 3.133: 99.3232% ( 46) 00:15:24.185 3.133 - 3.147: 99.4605% ( 28) 00:15:24.185 3.147 - 3.160: 99.5537% ( 19) 00:15:24.185 3.160 - 3.173: 99.5831% ( 6) 00:15:24.185 3.173 - 3.187: 99.5880% ( 1) 00:15:24.185 3.187 - 3.200: 99.5929% ( 1) 00:15:24.186 3.213 - 3.227: 99.5978% ( 1) 00:15:24.186 3.293 - 3.307: 99.6027% ( 1) 00:15:24.186 3.520 - 3.547: 99.6076% ( 1) 00:15:24.186 3.600 - 3.627: 99.6125% ( 1) 00:15:24.186 3.627 - 3.653: 99.6174% ( 1) 00:15:24.186 3.760 - 3.787: 99.6223% ( 1) 00:15:24.186 4.293 - 4.320: 99.6272% ( 1) 00:15:24.186 4.320 - 4.347: 99.6371% ( 2) 00:15:24.186 4.347 - 4.373: 99.6469% ( 2) 00:15:24.186 4.453 - 4.480: 99.6518% ( 1) 00:15:24.186 4.480 - 4.507: 99.6567% ( 1) 00:15:24.186 4.613 - 4.640: 99.6616% ( 1) 00:15:24.186 4.640 - 4.667: 99.6714% ( 2) 00:15:24.186 4.667 - 4.693: 99.6763% ( 1) 00:15:24.186 4.693 - 4.720: 99.6812% ( 1) 00:15:24.186 4.720 - 4.747: 99.6861% ( 1) 00:15:24.186 4.747 - 4.773: 99.7008% ( 3) 00:15:24.186 4.827 - 4.853: 99.7057% ( 1) 00:15:24.186 4.853 - 4.880: 99.7106% ( 1) 00:15:24.186 4.880 - 4.907: 99.7155% ( 1) 00:15:24.186 4.907 - 4.933: 99.7253% ( 2) 00:15:24.186 4.960 - 4.987: 99.7401% ( 3) 00:15:24.186 4.987 - 5.013: 99.7597% ( 4) 00:15:24.186 5.013 - 5.040: 99.7646% ( 1) 00:15:24.186 5.040 - 5.067: 99.7744% ( 2) 00:15:24.186 5.067 - 5.093: 99.7891% ( 3) 00:15:24.186 5.093 - 5.120: 99.7989% ( 2) 00:15:24.186 5.120 - 5.147: 99.8038% ( 1) 00:15:24.186 5.147 - 5.173: 99.8136% ( 2) 00:15:24.186 5.173 - 
5.200: 99.8185% ( 1) 00:15:24.186 5.200 - 5.227: 99.8234% ( 1) 00:15:24.186 5.227 - 5.253: 99.8283% ( 1) 00:15:24.186 5.253 - 5.280: 99.8431% ( 3) 00:15:24.186 5.280 - 5.307: 99.8480% ( 1) 00:15:24.186 5.547 - 5.573: 99.8529% ( 1) 00:15:24.186 5.653 - 5.680: 99.8578% ( 1) 00:15:24.186 5.680 - 5.707: 99.8627% ( 1) 00:15:24.186 5.707 - 5.733: 99.8676% ( 1) 00:15:24.186 5.733 - 5.760: 99.8725% ( 1) 00:15:24.186 5.760 - 5.787: 99.8823% ( 2) 00:15:24.186 5.920 - 5.947: 99.8921% ( 2) 00:15:24.186 5.947 - 5.973: 99.8970% ( 1) 00:15:24.186 6.187 - 6.213: 99.9019% ( 1) 00:15:24.186 6.373 - 6.400: 99.9117% ( 2) 00:15:24.186 6.400 - 6.427: 99.9166% ( 1) 00:15:24.186 6.427 - 6.453: 99.9215% ( 1) 00:15:24.186 6.667 - 6.693: 99.9313% ( 2) 00:15:24.186 6.773 - 6.800: 99.9362% ( 1) 00:15:24.186 8.000 - 8.053: 99.9411% ( 1) 00:15:24.186 10.187 - 10.240: 99.9460% ( 1) 00:15:24.186 3986.773 - 4014.080: 100.0000% ( 11) 00:15:24.186 00:15:24.186 [2024-11-06 13:12:06.044274] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.186 Complete histogram 00:15:24.186 ================== 00:15:24.186 Range in us Cumulative Count 00:15:24.186 1.687 - 1.693: 0.0098% ( 2) 00:15:24.186 1.693 - 1.700: 0.2109% ( 41) 00:15:24.186 1.700 - 1.707: 0.8779% ( 136) 00:15:24.186 1.707 - 1.720: 1.9226% ( 213) 00:15:24.186 1.720 - 1.733: 8.0534% ( 1250) 00:15:24.186 1.733 - 1.747: 15.1454% ( 1446) 00:15:24.186 1.747 - 1.760: 54.6864% ( 8062) 00:15:24.186 1.760 - 1.773: 76.3647% ( 4420) 00:15:24.186 1.773 - 1.787: 82.6769% ( 1287) 00:15:24.186 1.787 - 1.800: 84.4377% ( 359) 00:15:24.186 1.800 - 1.813: 87.4050% ( 605) 00:15:24.186 1.813 - 1.827: 92.1085% ( 959) 00:15:24.186 1.827 - 1.840: 96.4344% ( 882) 00:15:24.186 1.840 - 1.853: 98.6169% ( 445) 00:15:24.186 1.853 - 1.867: 99.3232% ( 144) 00:15:24.186 1.867 - 1.880: 99.4997% ( 36) 00:15:24.186 1.880 - 1.893: 99.5193% ( 4) 00:15:24.186 1.893 - 1.907: 99.5243% ( 1) 00:15:24.186 1.907 - 1.920: 99.5292% ( 1) 00:15:24.186 3.227 - 3.240: 99.5341% ( 1) 00:15:24.186 3.240 - 3.253: 99.5390% ( 1) 00:15:24.186 3.320 - 3.333: 99.5439% ( 1) 00:15:24.186 3.333 - 3.347: 99.5488% ( 1) 00:15:24.186 3.400 - 3.413: 99.5537% ( 1) 00:15:24.186 3.413 - 3.440: 99.5586% ( 1) 00:15:24.186 3.440 - 3.467: 99.5635% ( 1) 00:15:24.186 3.493 - 3.520: 99.5684% ( 1) 00:15:24.186 3.573 - 3.600: 99.5733% ( 1) 00:15:24.186 3.600 - 3.627: 99.5782% ( 1) 00:15:24.186 3.733 - 3.760: 99.5831% ( 1) 00:15:24.186 3.867 - 3.893: 99.5880% ( 1) 00:15:24.186 4.053 - 4.080: 99.5978% ( 2) 00:15:24.186 4.080 - 4.107: 99.6027% ( 1) 00:15:24.186 4.107 - 4.133: 99.6076% ( 1) 00:15:24.186 4.213 - 4.240: 99.6125% ( 1) 00:15:24.186 4.347 - 4.373: 99.6174% ( 1) 00:15:24.186 4.373 - 4.400: 99.6223% ( 1) 00:15:24.186 5.120 - 5.147: 99.6322% ( 2) 00:15:24.186 5.467 - 5.493: 99.6371% ( 1) 00:15:24.186 5.600 - 5.627: 99.6420% ( 1) 00:15:24.186 7.840 - 7.893: 99.6469% ( 1) 00:15:24.186 3986.773 - 4014.080: 100.0000% ( 72) 00:15:24.186 00:15:24.186 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:24.186 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:24.186 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:24.186 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:24.186 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:24.446 [ 00:15:24.446 { 00:15:24.446 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:24.446 "subtype": "Discovery", 00:15:24.446 "listen_addresses": [], 00:15:24.446 "allow_any_host": true, 00:15:24.446 "hosts": [] 00:15:24.446 }, 00:15:24.446 { 00:15:24.446 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:24.446 "subtype": "NVMe", 00:15:24.446 "listen_addresses": [ 00:15:24.446 { 00:15:24.446 "trtype": "VFIOUSER", 00:15:24.446 "adrfam": "IPv4", 00:15:24.446 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:24.446 "trsvcid": "0" 00:15:24.446 } 00:15:24.446 ], 00:15:24.446 "allow_any_host": true, 00:15:24.446 "hosts": [], 00:15:24.446 "serial_number": "SPDK1", 00:15:24.446 "model_number": "SPDK bdev Controller", 00:15:24.446 "max_namespaces": 32, 00:15:24.446 "min_cntlid": 1, 00:15:24.446 "max_cntlid": 65519, 00:15:24.446 "namespaces": [ 00:15:24.446 { 00:15:24.446 "nsid": 1, 00:15:24.446 "bdev_name": "Malloc1", 00:15:24.446 "name": "Malloc1", 00:15:24.446 "nguid": "EDF6474C3544466E908478E67C70D1C1", 00:15:24.446 "uuid": "edf6474c-3544-466e-9084-78e67c70d1c1" 00:15:24.446 }, 00:15:24.446 { 00:15:24.446 "nsid": 2, 00:15:24.446 "bdev_name": "Malloc3", 00:15:24.446 "name": "Malloc3", 00:15:24.446 "nguid": "CDEC18133C104CF0A9F5EEB830288906", 00:15:24.446 "uuid": "cdec1813-3c10-4cf0-a9f5-eeb830288906" 00:15:24.446 } 00:15:24.446 ] 00:15:24.446 }, 00:15:24.446 { 00:15:24.446 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:24.446 "subtype": "NVMe", 00:15:24.446 "listen_addresses": [ 00:15:24.446 { 00:15:24.446 "trtype": "VFIOUSER", 00:15:24.446 "adrfam": "IPv4", 00:15:24.446 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:24.446 "trsvcid": "0" 00:15:24.446 } 00:15:24.446 ], 00:15:24.446 "allow_any_host": true, 00:15:24.446 "hosts": [], 00:15:24.446 "serial_number": "SPDK2", 00:15:24.446 "model_number": "SPDK bdev Controller", 00:15:24.446 "max_namespaces": 32, 00:15:24.446 "min_cntlid": 1, 00:15:24.446 "max_cntlid": 65519, 00:15:24.446 "namespaces": [ 00:15:24.446 { 00:15:24.446 "nsid": 1, 00:15:24.447 "bdev_name": "Malloc2", 00:15:24.447 "name": "Malloc2", 00:15:24.447 "nguid": "5B9E84BEF6DF4983BD149187954BBAA0", 00:15:24.447 "uuid": "5b9e84be-f6df-4983-bd14-9187954bbaa0" 00:15:24.447 } 00:15:24.447 ] 00:15:24.447 } 00:15:24.447 ] 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1684395 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:24.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:24.706 [2024-11-06 13:12:06.420915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.706 Malloc4 00:15:24.706 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:24.966 [2024-11-06 13:12:06.609183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.966 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:24.966 Asynchronous Event Request test 00:15:24.966 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:24.966 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:24.966 Registering asynchronous event callbacks... 00:15:24.966 Starting namespace attribute notice tests for all controllers... 00:15:24.966 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:24.966 aer_cb - Changed Namespace 00:15:24.966 Cleaning up... 
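Condensed, the AER flow exercised above: the aer example attaches to cnode2 and registers for namespace-attribute-change notices, the script then hot-adds a second namespace over RPC, and the target raises the Changed Namespace AEN that aer_cb reports. The RPC sequence, taken verbatim from this run:

# Hot-add a namespace to fire a Changed Namespace AEN on cnode2:
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
scripts/rpc.py nvmf_get_subsystems   # Malloc4 now listed as nsid 2, per the JSON below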
00:15:24.966 [ 00:15:24.966 { 00:15:24.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:24.966 "subtype": "Discovery", 00:15:24.966 "listen_addresses": [], 00:15:24.966 "allow_any_host": true, 00:15:24.966 "hosts": [] 00:15:24.966 }, 00:15:24.966 { 00:15:24.966 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:24.966 "subtype": "NVMe", 00:15:24.966 "listen_addresses": [ 00:15:24.966 { 00:15:24.966 "trtype": "VFIOUSER", 00:15:24.966 "adrfam": "IPv4", 00:15:24.966 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:24.966 "trsvcid": "0" 00:15:24.966 } 00:15:24.966 ], 00:15:24.966 "allow_any_host": true, 00:15:24.966 "hosts": [], 00:15:24.966 "serial_number": "SPDK1", 00:15:24.966 "model_number": "SPDK bdev Controller", 00:15:24.966 "max_namespaces": 32, 00:15:24.966 "min_cntlid": 1, 00:15:24.966 "max_cntlid": 65519, 00:15:24.966 "namespaces": [ 00:15:24.966 { 00:15:24.966 "nsid": 1, 00:15:24.966 "bdev_name": "Malloc1", 00:15:24.966 "name": "Malloc1", 00:15:24.966 "nguid": "EDF6474C3544466E908478E67C70D1C1", 00:15:24.966 "uuid": "edf6474c-3544-466e-9084-78e67c70d1c1" 00:15:24.966 }, 00:15:24.966 { 00:15:24.966 "nsid": 2, 00:15:24.966 "bdev_name": "Malloc3", 00:15:24.966 "name": "Malloc3", 00:15:24.966 "nguid": "CDEC18133C104CF0A9F5EEB830288906", 00:15:24.966 "uuid": "cdec1813-3c10-4cf0-a9f5-eeb830288906" 00:15:24.966 } 00:15:24.966 ] 00:15:24.966 }, 00:15:24.966 { 00:15:24.966 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:24.966 "subtype": "NVMe", 00:15:24.966 "listen_addresses": [ 00:15:24.966 { 00:15:24.966 "trtype": "VFIOUSER", 00:15:24.966 "adrfam": "IPv4", 00:15:24.966 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:24.966 "trsvcid": "0" 00:15:24.966 } 00:15:24.966 ], 00:15:24.966 "allow_any_host": true, 00:15:24.966 "hosts": [], 00:15:24.966 "serial_number": "SPDK2", 00:15:24.966 "model_number": "SPDK bdev Controller", 00:15:24.966 "max_namespaces": 32, 00:15:24.966 "min_cntlid": 1, 00:15:24.966 "max_cntlid": 65519, 00:15:24.966 "namespaces": [ 00:15:24.966 { 00:15:24.966 "nsid": 1, 00:15:24.966 "bdev_name": "Malloc2", 00:15:24.966 "name": "Malloc2", 00:15:24.966 "nguid": "5B9E84BEF6DF4983BD149187954BBAA0", 00:15:24.966 "uuid": "5b9e84be-f6df-4983-bd14-9187954bbaa0" 00:15:24.966 }, 00:15:24.966 { 00:15:24.967 "nsid": 2, 00:15:24.967 "bdev_name": "Malloc4", 00:15:24.967 "name": "Malloc4", 00:15:24.967 "nguid": "91C885D3B8404794AEDA3E297AF73ECE", 00:15:24.967 "uuid": "91c885d3-b840-4794-aeda-3e297af73ece" 00:15:24.967 } 00:15:24.967 ] 00:15:24.967 } 00:15:24.967 ] 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1684395 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1675300 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1675300 ']' 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1675300 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:24.967 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1675300 00:15:25.227 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:25.227 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:25.227 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1675300' 00:15:25.227 killing process with pid 1675300 00:15:25.227 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1675300 00:15:25.227 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1675300 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1684508 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1684508' 00:15:25.227 Process pid: 1684508 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1684508 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1684508 ']' 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:25.227 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.227 [2024-11-06 13:12:07.092004] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:25.227 [2024-11-06 13:12:07.092915] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:15:25.227 [2024-11-06 13:12:07.092954] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.487 [2024-11-06 13:12:07.175623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.487 [2024-11-06 13:12:07.204727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.487 [2024-11-06 13:12:07.204762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.487 [2024-11-06 13:12:07.204768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.487 [2024-11-06 13:12:07.204773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.487 [2024-11-06 13:12:07.204777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.487 [2024-11-06 13:12:07.206007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.487 [2024-11-06 13:12:07.206170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.487 [2024-11-06 13:12:07.206326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.487 [2024-11-06 13:12:07.206327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.487 [2024-11-06 13:12:07.256515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:25.487 [2024-11-06 13:12:07.256926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:25.487 [2024-11-06 13:12:07.257684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:25.487 [2024-11-06 13:12:07.258387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:25.487 [2024-11-06 13:12:07.258530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
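With every reactor now switched to interrupt mode, the setup helper recreates the VFIOUSER transport and both devices. The bring-up sequence, condensed verbatim from the trace that follows (the target is backgrounded here only for the sketch; the -M -I transport flags are passed through from the run unchanged):

# Interrupt-mode target plus VFIOUSER transport, then per-device setup:
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# ...and the same malloc/subsystem/listener triple repeats for Malloc2 / cnode2 / vfio-user2.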
00:15:26.058 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:26.058 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:26.058 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:27.439 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:27.439 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:27.439 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:27.439 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:27.439 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:27.439 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:27.439 Malloc1 00:15:27.699 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:27.699 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:27.960 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:28.221 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.221 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:28.221 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:28.481 Malloc2 00:15:28.481 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:28.481 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:28.740 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 1684508 ']' 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1684508' 00:15:29.002 killing process with pid 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1684508 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:29.002 00:15:29.002 real 0m51.024s 00:15:29.002 user 3m15.361s 00:15:29.002 sys 0m2.658s 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:29.002 ************************************ 00:15:29.002 END TEST nvmf_vfio_user 00:15:29.002 ************************************ 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:29.002 13:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 ************************************ 00:15:29.264 START TEST nvmf_vfio_user_nvme_compliance 00:15:29.264 ************************************ 00:15:29.264 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:29.264 * Looking for test storage... 
00:15:29.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.264 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.265 --rc genhtml_branch_coverage=1 00:15:29.265 --rc genhtml_function_coverage=1 00:15:29.265 --rc genhtml_legend=1 00:15:29.265 --rc geninfo_all_blocks=1 00:15:29.265 --rc geninfo_unexecuted_blocks=1 00:15:29.265 00:15:29.265 ' 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.265 --rc genhtml_branch_coverage=1 00:15:29.265 --rc genhtml_function_coverage=1 00:15:29.265 --rc genhtml_legend=1 00:15:29.265 --rc geninfo_all_blocks=1 00:15:29.265 --rc geninfo_unexecuted_blocks=1 00:15:29.265 00:15:29.265 ' 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.265 --rc genhtml_branch_coverage=1 00:15:29.265 --rc genhtml_function_coverage=1 00:15:29.265 --rc genhtml_legend=1 00:15:29.265 --rc geninfo_all_blocks=1 00:15:29.265 --rc geninfo_unexecuted_blocks=1 00:15:29.265 00:15:29.265 ' 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.265 --rc genhtml_branch_coverage=1 00:15:29.265 --rc genhtml_function_coverage=1 00:15:29.265 --rc genhtml_legend=1 00:15:29.265 --rc geninfo_all_blocks=1 00:15:29.265 --rc 
geninfo_unexecuted_blocks=1 00:15:29.265 00:15:29.265 ' 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.265 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1685759 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1685759' 00:15:29.527 Process pid: 1685759 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1685759 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1685759 ']' 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.527 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.527 [2024-11-06 13:12:11.234648] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:15:29.527 [2024-11-06 13:12:11.234720] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.527 [2024-11-06 13:12:11.321484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.527 [2024-11-06 13:12:11.357001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.527 [2024-11-06 13:12:11.357033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.527 [2024-11-06 13:12:11.357039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.527 [2024-11-06 13:12:11.357044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.527 [2024-11-06 13:12:11.357049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.527 [2024-11-06 13:12:11.358330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.527 [2024-11-06 13:12:11.358490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.527 [2024-11-06 13:12:11.358492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.469 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:30.469 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:30.469 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.408 malloc0 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:31.408 13:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.408 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:31.408 00:15:31.408 00:15:31.408 CUnit - A unit testing framework for C - Version 2.1-3 00:15:31.408 http://cunit.sourceforge.net/ 00:15:31.408 00:15:31.408 00:15:31.408 Suite: nvme_compliance 00:15:31.408 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 13:12:13.286835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.408 [2024-11-06 13:12:13.288138] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:31.408 [2024-11-06 13:12:13.288150] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:31.408 [2024-11-06 13:12:13.288155] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:31.408 [2024-11-06 13:12:13.289852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.669 passed 00:15:31.669 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 13:12:13.364332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.669 [2024-11-06 13:12:13.367349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.669 passed 00:15:31.669 Test: admin_identify_ns ...[2024-11-06 13:12:13.446906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.669 [2024-11-06 13:12:13.507754] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:31.669 [2024-11-06 13:12:13.515758] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:31.669 [2024-11-06 13:12:13.536830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:31.669 passed 00:15:31.928 Test: admin_get_features_mandatory_features ...[2024-11-06 13:12:13.610048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.928 [2024-11-06 13:12:13.613070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.928 passed 00:15:31.928 Test: admin_get_features_optional_features ...[2024-11-06 13:12:13.689519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.928 [2024-11-06 13:12:13.692542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.928 passed 00:15:31.928 Test: admin_set_features_number_of_queues ...[2024-11-06 13:12:13.767108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.188 [2024-11-06 13:12:13.875838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.188 passed 00:15:32.188 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 13:12:13.948071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.188 [2024-11-06 13:12:13.951090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.188 passed 00:15:32.188 Test: admin_get_log_page_with_lpo ...[2024-11-06 13:12:14.026823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.448 [2024-11-06 13:12:14.095755] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:32.448 [2024-11-06 13:12:14.108804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.448 passed 00:15:32.448 Test: fabric_property_get ...[2024-11-06 13:12:14.183870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.448 [2024-11-06 13:12:14.185072] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:32.448 [2024-11-06 13:12:14.186887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.448 passed 00:15:32.448 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 13:12:14.264381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.448 [2024-11-06 13:12:14.265570] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:32.448 [2024-11-06 13:12:14.267402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.448 passed 00:15:32.448 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 13:12:14.342174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.708 [2024-11-06 13:12:14.426752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.708 [2024-11-06 13:12:14.442752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.708 [2024-11-06 13:12:14.447831] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.708 passed 00:15:32.708 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 13:12:14.521041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.708 [2024-11-06 13:12:14.522246] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:32.708 [2024-11-06 13:12:14.524061] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.708 passed 00:15:32.708 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 13:12:14.600110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.968 [2024-11-06 13:12:14.675761] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:32.968 [2024-11-06 13:12:14.699752] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.968 [2024-11-06 13:12:14.704824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.968 passed 00:15:32.968 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 13:12:14.780870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.968 [2024-11-06 13:12:14.782067] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:32.968 [2024-11-06 13:12:14.782088] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:32.968 [2024-11-06 13:12:14.783886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.968 passed 00:15:32.968 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 13:12:14.857626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.228 [2024-11-06 13:12:14.949753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:33.228 [2024-11-06 13:12:14.957753] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:33.228 [2024-11-06 13:12:14.965752] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:33.228 [2024-11-06 13:12:14.973757] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:33.228 [2024-11-06 13:12:15.002812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.228 passed 00:15:33.228 Test: admin_create_io_sq_verify_pc ...[2024-11-06 13:12:15.075994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.228 [2024-11-06 13:12:15.092757] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:33.228 [2024-11-06 13:12:15.110153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.489 passed 00:15:33.489 Test: admin_create_io_qp_max_qps ...[2024-11-06 13:12:15.188623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.428 [2024-11-06 13:12:16.286755] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:34.997 [2024-11-06 13:12:16.670332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.997 passed 00:15:34.997 Test: admin_create_io_sq_shared_cq ...[2024-11-06 13:12:16.748143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.997 [2024-11-06 13:12:16.886750] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:35.257 [2024-11-06 13:12:16.923801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.257 passed 00:15:35.257 00:15:35.257 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.257 suites 1 1 n/a 0 0 00:15:35.257 tests 18 18 18 0 0 00:15:35.257 asserts 
360 360 360 0 n/a 00:15:35.257 00:15:35.257 Elapsed time = 1.495 seconds 00:15:35.257 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1685759 00:15:35.257 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1685759 ']' 00:15:35.258 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1685759 00:15:35.258 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:35.258 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.258 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1685759 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1685759' 00:15:35.258 killing process with pid 1685759 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1685759 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1685759 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:35.258 00:15:35.258 real 0m6.209s 00:15:35.258 user 0m17.595s 00:15:35.258 sys 0m0.540s 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.258 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.258 ************************************ 00:15:35.258 END TEST nvmf_vfio_user_nvme_compliance 00:15:35.258 ************************************ 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.519 ************************************ 00:15:35.519 START TEST nvmf_vfio_user_fuzz 00:15:35.519 ************************************ 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.519 * Looking for test storage... 
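A minimal sketch of how the compliance suite above was wired up, reduced to the two rpc_cmd calls visible at the start of this excerpt plus the runner invocation. rpc_cmd wraps scripts/rpc.py; the transport, bdev and subsystem are created earlier in compliance.sh and are not shown in this excerpt (the complete sequence appears again in the fuzz test below).

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # run the CUnit suite against the vfio-user socket directory:
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'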
00:15:35.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:35.519 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:35.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.520 --rc genhtml_branch_coverage=1 00:15:35.520 --rc genhtml_function_coverage=1 00:15:35.520 --rc genhtml_legend=1 00:15:35.520 --rc geninfo_all_blocks=1 00:15:35.520 --rc geninfo_unexecuted_blocks=1 00:15:35.520 00:15:35.520 ' 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:35.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.520 --rc genhtml_branch_coverage=1 00:15:35.520 --rc genhtml_function_coverage=1 00:15:35.520 --rc genhtml_legend=1 00:15:35.520 --rc geninfo_all_blocks=1 00:15:35.520 --rc geninfo_unexecuted_blocks=1 00:15:35.520 00:15:35.520 ' 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:35.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.520 --rc genhtml_branch_coverage=1 00:15:35.520 --rc genhtml_function_coverage=1 00:15:35.520 --rc genhtml_legend=1 00:15:35.520 --rc geninfo_all_blocks=1 00:15:35.520 --rc geninfo_unexecuted_blocks=1 00:15:35.520 00:15:35.520 ' 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:35.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.520 --rc genhtml_branch_coverage=1 00:15:35.520 --rc genhtml_function_coverage=1 00:15:35.520 --rc genhtml_legend=1 00:15:35.520 --rc geninfo_all_blocks=1 00:15:35.520 --rc geninfo_unexecuted_blocks=1 00:15:35.520 00:15:35.520 ' 00:15:35.520 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:35.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.781 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1687125 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1687125' 00:15:35.782 Process pid: 1687125 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1687125 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1687125 ']' 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
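The launch-and-wait pattern the fuzz script just went through, condensed. The binary, flags, trap string and max_retries=100 are taken from the log lines above; the polling loop body is an assumption standing in for autotest_common.sh's waitforlisten, probing the default RPC socket with rpc_get_methods, a method every running SPDK target answers.

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT  # killprocess is an autotest_common.sh helper
  max_retries=100
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      (( max_retries-- > 0 )) || exit 1                    # fail rather than hang the job
      sleep 0.1
  done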
00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.782 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.721 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.721 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:36.721 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.663 malloc0 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
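Collected in one place, the harness setup that produced the trid above. Every call corresponds to an rpc_cmd or mkdir line in this log (rpc_cmd wraps scripts/rpc.py), and the fuzzer invocation is the one that follows immediately below.

  mkdir -p /var/run/vfio-user
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0      # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # 30-second pass, core mask 0x2, seed fixed so failures reproduce:
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a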
00:15:37.663 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:09.887 Fuzzing completed. Shutting down the fuzz application 00:16:09.887 00:16:09.887 Dumping successful admin opcodes: 00:16:09.887 8, 9, 10, 24, 00:16:09.887 Dumping successful io opcodes: 00:16:09.887 0, 00:16:09.887 NS: 0x20000081ef00 I/O qp, Total commands completed: 1340990, total successful commands: 5261, random_seed: 2077435328 00:16:09.887 NS: 0x20000081ef00 admin qp, Total commands completed: 296734, total successful commands: 2396, random_seed: 4122995520 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1687125 ']' 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1687125' 00:16:09.887 killing process with pid 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1687125 00:16:09.887 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:09.887 00:16:09.887 real 0m32.784s 00:16:09.887 user 0m37.686s 00:16:09.887 sys 0m23.548s 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.887 
************************************ 00:16:09.887 END TEST nvmf_vfio_user_fuzz 00:16:09.887 ************************************ 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:09.887 ************************************ 00:16:09.887 START TEST nvmf_auth_target 00:16:09.887 ************************************ 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:09.887 * Looking for test storage... 00:16:09.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.887 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:09.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.888 --rc genhtml_branch_coverage=1 00:16:09.888 --rc genhtml_function_coverage=1 00:16:09.888 --rc genhtml_legend=1 00:16:09.888 --rc geninfo_all_blocks=1 00:16:09.888 --rc geninfo_unexecuted_blocks=1 00:16:09.888 00:16:09.888 ' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:09.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.888 --rc genhtml_branch_coverage=1 00:16:09.888 --rc genhtml_function_coverage=1 00:16:09.888 --rc genhtml_legend=1 00:16:09.888 --rc geninfo_all_blocks=1 00:16:09.888 --rc geninfo_unexecuted_blocks=1 00:16:09.888 00:16:09.888 ' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:09.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.888 --rc genhtml_branch_coverage=1 00:16:09.888 --rc genhtml_function_coverage=1 00:16:09.888 --rc genhtml_legend=1 00:16:09.888 --rc geninfo_all_blocks=1 00:16:09.888 --rc geninfo_unexecuted_blocks=1 00:16:09.888 00:16:09.888 ' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:09.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.888 --rc genhtml_branch_coverage=1 00:16:09.888 --rc genhtml_function_coverage=1 00:16:09.888 --rc genhtml_legend=1 00:16:09.888 --rc geninfo_all_blocks=1 00:16:09.888 --rc geninfo_unexecuted_blocks=1 00:16:09.888 00:16:09.888 ' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.888 13:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:09.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:09.888 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:09.889 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.473 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:16.474 
13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:16.474 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.474 13:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:16.474 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:16.474 Found net devices under 0000:31:00.0: cvl_0_0 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:16.474 Found net devices under 0000:31:00.1: cvl_0_1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.474 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:16.475 13:12:57 
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:16.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:16.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms
00:16:16.475
00:16:16.475 --- 10.0.0.2 ping statistics ---
00:16:16.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.475 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:16.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:16.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:16:16.475
00:16:16.475 --- 10.0.0.1 ping statistics ---
00:16:16.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.475 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1697147
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1697147
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1697147 ']'
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
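With both directions pinging, nvmfappstart launches the target inside the namespace and waitforlisten polls until the RPC socket answers. A sketch of that start-and-wait pattern, assuming $SPDK_ROOT points at the checkout (the suite's real waitforlisten retry loop is more elaborate):

ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# wait until the target answers on its default RPC socket
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done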
00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.475 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1697490 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e28b8a83b21b25ba6134ff6a4aae5601687f1643ffc467f9 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Jjo 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e28b8a83b21b25ba6134ff6a4aae5601687f1643ffc467f9 0 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e28b8a83b21b25ba6134ff6a4aae5601687f1643ffc467f9 0 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e28b8a83b21b25ba6134ff6a4aae5601687f1643ffc467f9 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
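gen_dhchap_key above draws the raw secret from /dev/urandom with xxd and hands it to an inline python snippet. The resulting DH-HMAC-CHAP secret has the shape DHHC-1:<digest id>:<base64 of the ASCII key plus its CRC-32>:, with digest ids 0/1/2/3 for null/sha256/sha384/sha512 as in the digests map in the trace. A condensed sketch of the same two steps (python3 assumed; SPDK's format_dhchap_key is believed equivalent in effect):

len=48                                          # ASCII length of the secret
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # hex output doubles the byte count
python3 - "$key" <<'EOF'
import sys, base64, struct, zlib
key = sys.argv[1].encode()
# DHHC-1:<digest id>:<base64(key || CRC-32(key), CRC packed little-endian)>:
print("DHHC-1:00:" + base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode() + ":")
EOF

The base64 payload of the first secret printed later in this log decodes back to the 48-character hex string followed by four trailing CRC bytes, which matches this layout.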
00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Jjo 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Jjo 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Jjo 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc831e9a8879cc139c9cc41a6a99b8316903af3f8a2da8cf4ed845af640afb3f 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iFl 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc831e9a8879cc139c9cc41a6a99b8316903af3f8a2da8cf4ed845af640afb3f 3 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc831e9a8879cc139c9cc41a6a99b8316903af3f8a2da8cf4ed845af640afb3f 3 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc831e9a8879cc139c9cc41a6a99b8316903af3f8a2da8cf4ed845af640afb3f 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:17.047 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iFl 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iFl 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.iFl 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
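The same layout can be checked in reverse. A hypothetical helper (not part of the suite) that verifies the trailing CRC-32 of any DHHC-1 secret:

secret='DHHC-1:00:...'   # e.g. one of the secrets printed later in this log
python3 - "$secret" <<'EOF'
import sys, base64, struct, zlib
raw = base64.b64decode(sys.argv[1].split(":")[2])
key, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
assert zlib.crc32(key) == crc, "corrupt DHHC-1 secret"
print("ok:", len(key), "byte secret")
EOF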
00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c017a2334153ea972e43b80b68ff90e 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HwX 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c017a2334153ea972e43b80b68ff90e 1 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c017a2334153ea972e43b80b68ff90e 1 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c017a2334153ea972e43b80b68ff90e 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:17.309 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.309 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HwX 00:16:17.309 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HwX 00:16:17.309 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.HwX 00:16:17.309 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:17.309 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=124e45a8ed4a288f8627d23818cf0c0bb9b8b932ad8bb1bc 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.N3W 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 124e45a8ed4a288f8627d23818cf0c0bb9b8b932ad8bb1bc 2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 124e45a8ed4a288f8627d23818cf0c0bb9b8b932ad8bb1bc 2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.310 13:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=124e45a8ed4a288f8627d23818cf0c0bb9b8b932ad8bb1bc 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.N3W 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.N3W 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.N3W 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=368b6ac7e90c86721a4d4f1b17b49b6d2ce2a0b9524b806c 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lvX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 368b6ac7e90c86721a4d4f1b17b49b6d2ce2a0b9524b806c 2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 368b6ac7e90c86721a4d4f1b17b49b6d2ce2a0b9524b806c 2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=368b6ac7e90c86721a4d4f1b17b49b6d2ce2a0b9524b806c 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lvX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lvX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lvX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d7e21c02592bb68afe9b27101c366861 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.w7M 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d7e21c02592bb68afe9b27101c366861 1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d7e21c02592bb68afe9b27101c366861 1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d7e21c02592bb68afe9b27101c366861 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:17.310 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.w7M 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.w7M 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.w7M 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a286317d9ede6ee2764c7e59bccb02b11c835b4729ba02a15a055ff54addc8ae 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hNA 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key a286317d9ede6ee2764c7e59bccb02b11c835b4729ba02a15a055ff54addc8ae 3 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a286317d9ede6ee2764c7e59bccb02b11c835b4729ba02a15a055ff54addc8ae 3 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a286317d9ede6ee2764c7e59bccb02b11c835b4729ba02a15a055ff54addc8ae 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hNA 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hNA 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.hNA 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1697147 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1697147 ']' 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.572 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1697490 /var/tmp/host.sock 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1697490 ']' 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:17.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
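All four key files (plus three controller-key files) are now on disk. The traces that follow register each file on both sides and then drive one authentication round per key. Reduced to its RPCs, a round looks roughly like this sketch (run from the spdk checkout; /var/tmp/host.sock serves the initiator-side spdk_tgt, the default spdk.sock the target; $hostnqn stands for the uuid-based host NQN used throughout):

rpc=scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.Jjo                          # target keyring
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Jjo    # host keyring
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0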
00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.833 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jjo 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Jjo 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Jjo 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.iFl ]] 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFl 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFl 00:16:18.093 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFl 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HwX 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.354 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HwX 00:16:18.354 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HwX 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.N3W ]] 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N3W 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N3W 00:16:18.613 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N3W 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lvX 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lvX 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lvX 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.w7M ]] 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w7M 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w7M 00:16:18.874 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w7M 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:19.136 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hNA 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.hNA 00:16:19.136 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.hNA 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.397 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.658 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.659 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.659 
13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:19.919
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.919 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:20.179 {
00:16:20.179 "cntlid": 1,
00:16:20.179 "qid": 0,
00:16:20.179 "state": "enabled",
00:16:20.179 "thread": "nvmf_tgt_poll_group_000",
00:16:20.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:16:20.179 "listen_address": {
00:16:20.179 "trtype": "TCP",
00:16:20.179 "adrfam": "IPv4",
00:16:20.179 "traddr": "10.0.0.2",
00:16:20.179 "trsvcid": "4420"
00:16:20.179 },
00:16:20.179 "peer_address": {
00:16:20.179 "trtype": "TCP",
00:16:20.179 "adrfam": "IPv4",
00:16:20.179 "traddr": "10.0.0.1",
00:16:20.179 "trsvcid": "45684"
00:16:20.179 },
00:16:20.179 "auth": {
00:16:20.179 "state": "completed",
00:16:20.179 "digest": "sha256",
00:16:20.179 "dhgroup": "null"
00:16:20.179 }
00:16:20.179 }
00:16:20.179 ]'
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:20.179 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
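The qpair dump is the actual pass/fail signal of each round: auth.state must reach "completed" with the negotiated digest and DH group. The jq checks above, gathered into one sketch (run from the spdk checkout, assuming a single qpair on the subsystem):

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]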
00:16:20.439 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=:
00:16:20.439 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=:
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:21.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:21.009 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:21.269 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:21.270 13:13:03
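The nvme_connect step above exercises the same credentials through the kernel initiator: nvme connect takes the formatted DHHC-1 secrets directly on the command line, so this leg needs no keyring. A sketch with the secrets elided (addresses and NQNs as in the trace; $hostid stands for the host UUID):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After the disconnect the host entry is removed from the subsystem, and the loop repeats the whole round with the next key, as the following traces show for key1 through key3 and then for the ffdhe2048 DH group.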
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.270 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.531 00:16:21.531 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.531 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.531 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.792 { 00:16:21.792 "cntlid": 3, 00:16:21.792 "qid": 0, 00:16:21.792 "state": "enabled", 00:16:21.792 "thread": "nvmf_tgt_poll_group_000", 00:16:21.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:21.792 "listen_address": { 00:16:21.792 "trtype": "TCP", 00:16:21.792 "adrfam": "IPv4", 00:16:21.792 "traddr": "10.0.0.2", 00:16:21.792 "trsvcid": "4420" 00:16:21.792 }, 00:16:21.792 "peer_address": { 00:16:21.792 "trtype": "TCP", 00:16:21.792 "adrfam": "IPv4", 00:16:21.792 "traddr": "10.0.0.1", 00:16:21.792 "trsvcid": "46148" 00:16:21.792 }, 00:16:21.792 "auth": { 00:16:21.792 "state": "completed", 00:16:21.792 "digest": "sha256", 00:16:21.792 "dhgroup": "null" 00:16:21.792 } 00:16:21.792 } 00:16:21.792 ]' 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.792 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.053 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:22.053 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.622 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.882 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:22.882 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.883 13:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.883 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.143 00:16:23.143 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.143 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.143 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.143 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.143 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.143 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.143 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.404 { 00:16:23.404 "cntlid": 5, 00:16:23.404 "qid": 0, 00:16:23.404 "state": "enabled", 00:16:23.404 "thread": "nvmf_tgt_poll_group_000", 00:16:23.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:23.404 "listen_address": { 00:16:23.404 "trtype": "TCP", 00:16:23.404 "adrfam": "IPv4", 00:16:23.404 "traddr": "10.0.0.2", 00:16:23.404 "trsvcid": "4420" 00:16:23.404 }, 00:16:23.404 "peer_address": { 00:16:23.404 "trtype": "TCP", 00:16:23.404 "adrfam": "IPv4", 00:16:23.404 "traddr": "10.0.0.1", 00:16:23.404 "trsvcid": "46172" 00:16:23.404 }, 00:16:23.404 "auth": { 00:16:23.404 "state": "completed", 00:16:23.404 "digest": "sha256", 00:16:23.404 "dhgroup": "null" 00:16:23.404 } 00:16:23.404 } 00:16:23.404 ]' 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.404 13:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.404 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.665 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:23.665 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.236 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.496 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.756 00:16:24.756 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.756 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.756 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.018 { 00:16:25.018 "cntlid": 7, 00:16:25.018 "qid": 0, 00:16:25.018 "state": "enabled", 00:16:25.018 "thread": "nvmf_tgt_poll_group_000", 00:16:25.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:25.018 "listen_address": { 00:16:25.018 "trtype": "TCP", 00:16:25.018 "adrfam": "IPv4", 00:16:25.018 "traddr": "10.0.0.2", 00:16:25.018 "trsvcid": "4420" 00:16:25.018 }, 00:16:25.018 "peer_address": { 00:16:25.018 "trtype": "TCP", 00:16:25.018 "adrfam": "IPv4", 00:16:25.018 "traddr": "10.0.0.1", 00:16:25.018 "trsvcid": "46200" 00:16:25.018 }, 00:16:25.018 "auth": { 00:16:25.018 "state": "completed", 00:16:25.018 "digest": "sha256", 00:16:25.018 "dhgroup": "null" 00:16:25.018 } 00:16:25.018 } 00:16:25.018 ]' 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.018 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.279 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:25.279 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.850 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.111 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.372 00:16:26.372 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.372 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.372 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.632 { 00:16:26.632 "cntlid": 9, 00:16:26.632 "qid": 0, 00:16:26.632 "state": "enabled", 00:16:26.632 "thread": "nvmf_tgt_poll_group_000", 00:16:26.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:26.632 "listen_address": { 00:16:26.632 "trtype": "TCP", 00:16:26.632 "adrfam": "IPv4", 00:16:26.632 "traddr": "10.0.0.2", 00:16:26.632 "trsvcid": "4420" 00:16:26.632 }, 00:16:26.632 "peer_address": { 00:16:26.632 "trtype": "TCP", 00:16:26.632 "adrfam": "IPv4", 00:16:26.632 "traddr": "10.0.0.1", 00:16:26.632 "trsvcid": "46222" 00:16:26.632 }, 00:16:26.632 "auth": { 00:16:26.632 "state": "completed", 00:16:26.632 "digest": "sha256", 00:16:26.632 "dhgroup": "ffdhe2048" 00:16:26.632 } 00:16:26.632 } 00:16:26.632 ]' 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.632 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.633 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.894 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:26.894 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.465 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.726 13:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.726 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.986 00:16:27.986 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.986 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.986 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.246 { 00:16:28.246 "cntlid": 11, 00:16:28.246 "qid": 0, 00:16:28.246 "state": "enabled", 00:16:28.246 "thread": "nvmf_tgt_poll_group_000", 00:16:28.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:28.246 "listen_address": { 00:16:28.246 "trtype": "TCP", 00:16:28.246 "adrfam": "IPv4", 00:16:28.246 "traddr": "10.0.0.2", 00:16:28.246 "trsvcid": "4420" 00:16:28.246 }, 00:16:28.246 "peer_address": { 00:16:28.246 "trtype": "TCP", 00:16:28.246 "adrfam": "IPv4", 00:16:28.246 "traddr": "10.0.0.1", 00:16:28.246 "trsvcid": "46260" 00:16:28.246 }, 00:16:28.246 "auth": { 00:16:28.246 "state": "completed", 00:16:28.246 "digest": "sha256", 00:16:28.246 "dhgroup": "ffdhe2048" 00:16:28.246 } 00:16:28.246 } 00:16:28.246 ]' 00:16:28.246 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.246 13:13:10 
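
The host-side attach step that keeps recurring in this trace is compact once the xtrace noise is stripped. A minimal sketch using only calls visible in this log; hostrpc is written out exactly as auth.sh@31 expands it, and $hostnqn is a shorthand variable for the nqn.2014-08.org.nvmexpress:uuid:008c5ac1-... host NQN used throughout:

    # hostrpc: the test's wrapper around the host SPDK instance's RPC socket (per auth.sh@31).
    hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

    # Attach a controller, authenticating with the named keyring keys
    # (key1/ckey1 were registered earlier in the test), then verify the
    # controller actually came up under the expected name.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
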
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.246 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.506 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:28.506 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.078 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.339 13:13:11 
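
Once attached, the trace checks the negotiation result from the target side. The sketch below condenses those jq assertions; rpc_cmd is assumed to be the harness's target-side rpc.py wrapper (its socket is not shown in this part of the log):

    # Fetch the subsystem's qpairs and assert what authentication actually negotiated.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
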
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.339 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.600 00:16:29.600 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.600 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.600 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.861 { 00:16:29.861 "cntlid": 13, 00:16:29.861 "qid": 0, 00:16:29.861 "state": "enabled", 00:16:29.861 "thread": "nvmf_tgt_poll_group_000", 00:16:29.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:29.861 "listen_address": { 00:16:29.861 "trtype": "TCP", 00:16:29.861 "adrfam": "IPv4", 00:16:29.861 "traddr": "10.0.0.2", 00:16:29.861 "trsvcid": "4420" 00:16:29.861 }, 00:16:29.861 "peer_address": { 00:16:29.861 "trtype": "TCP", 00:16:29.861 "adrfam": "IPv4", 00:16:29.861 "traddr": "10.0.0.1", 00:16:29.861 "trsvcid": "46290" 00:16:29.861 }, 00:16:29.861 "auth": { 00:16:29.861 "state": "completed", 00:16:29.861 "digest": 
"sha256", 00:16:29.861 "dhgroup": "ffdhe2048" 00:16:29.861 } 00:16:29.861 } 00:16:29.861 ]' 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.861 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.122 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:30.122 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.693 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.953 13:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.953 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.954 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.954 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.954 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.954 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.214 00:16:31.214 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.214 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.214 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.473 { 00:16:31.473 "cntlid": 15, 00:16:31.473 "qid": 0, 00:16:31.473 "state": "enabled", 00:16:31.473 "thread": "nvmf_tgt_poll_group_000", 00:16:31.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:31.473 "listen_address": { 00:16:31.473 "trtype": "TCP", 00:16:31.473 "adrfam": "IPv4", 00:16:31.473 "traddr": "10.0.0.2", 00:16:31.473 "trsvcid": "4420" 00:16:31.473 }, 00:16:31.473 "peer_address": { 00:16:31.473 "trtype": "TCP", 00:16:31.473 "adrfam": "IPv4", 00:16:31.473 "traddr": "10.0.0.1", 00:16:31.473 
"trsvcid": "53482" 00:16:31.473 }, 00:16:31.473 "auth": { 00:16:31.473 "state": "completed", 00:16:31.473 "digest": "sha256", 00:16:31.473 "dhgroup": "ffdhe2048" 00:16:31.473 } 00:16:31.473 } 00:16:31.473 ]' 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.473 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.733 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:31.733 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.304 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:32.565 13:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.565 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.825 00:16:32.825 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.825 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.825 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.087 { 00:16:33.087 "cntlid": 17, 00:16:33.087 "qid": 0, 00:16:33.087 "state": "enabled", 00:16:33.087 "thread": "nvmf_tgt_poll_group_000", 00:16:33.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:33.087 "listen_address": { 00:16:33.087 "trtype": "TCP", 00:16:33.087 "adrfam": "IPv4", 
00:16:33.087 "traddr": "10.0.0.2", 00:16:33.087 "trsvcid": "4420" 00:16:33.087 }, 00:16:33.087 "peer_address": { 00:16:33.087 "trtype": "TCP", 00:16:33.087 "adrfam": "IPv4", 00:16:33.087 "traddr": "10.0.0.1", 00:16:33.087 "trsvcid": "53510" 00:16:33.087 }, 00:16:33.087 "auth": { 00:16:33.087 "state": "completed", 00:16:33.087 "digest": "sha256", 00:16:33.087 "dhgroup": "ffdhe3072" 00:16:33.087 } 00:16:33.087 } 00:16:33.087 ]' 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.087 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.348 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:33.348 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.919 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.181 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.442 00:16:34.442 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.442 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.442 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.703 { 
00:16:34.703 "cntlid": 19, 00:16:34.703 "qid": 0, 00:16:34.703 "state": "enabled", 00:16:34.703 "thread": "nvmf_tgt_poll_group_000", 00:16:34.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:34.703 "listen_address": { 00:16:34.703 "trtype": "TCP", 00:16:34.703 "adrfam": "IPv4", 00:16:34.703 "traddr": "10.0.0.2", 00:16:34.703 "trsvcid": "4420" 00:16:34.703 }, 00:16:34.703 "peer_address": { 00:16:34.703 "trtype": "TCP", 00:16:34.703 "adrfam": "IPv4", 00:16:34.703 "traddr": "10.0.0.1", 00:16:34.703 "trsvcid": "53534" 00:16:34.703 }, 00:16:34.703 "auth": { 00:16:34.703 "state": "completed", 00:16:34.703 "digest": "sha256", 00:16:34.703 "dhgroup": "ffdhe3072" 00:16:34.703 } 00:16:34.703 } 00:16:34.703 ]' 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.703 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.963 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:34.963 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.533 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.794 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.054 00:16:36.054 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.054 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.054 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.313 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.313 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.313 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.313 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.313 13:13:18 
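
Stepping back, this whole stretch of log is one nested loop in auth.sh; the xtrace markers (@119/@120/@121/@123) give its shape directly. Reconstructed below; the array contents are inferred from the iterations actually traced (null plus the ffdhe groups, keys 0 through 3), so treat them as illustrative:

    # Loop driving this section, per the auth.sh xtrace.
    for dhgroup in "${dhgroups[@]}"; do    # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do     # 0 1 2 3
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done
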
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.313 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.313 { 00:16:36.313 "cntlid": 21, 00:16:36.313 "qid": 0, 00:16:36.313 "state": "enabled", 00:16:36.313 "thread": "nvmf_tgt_poll_group_000", 00:16:36.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:36.313 "listen_address": { 00:16:36.313 "trtype": "TCP", 00:16:36.313 "adrfam": "IPv4", 00:16:36.313 "traddr": "10.0.0.2", 00:16:36.314 "trsvcid": "4420" 00:16:36.314 }, 00:16:36.314 "peer_address": { 00:16:36.314 "trtype": "TCP", 00:16:36.314 "adrfam": "IPv4", 00:16:36.314 "traddr": "10.0.0.1", 00:16:36.314 "trsvcid": "53558" 00:16:36.314 }, 00:16:36.314 "auth": { 00:16:36.314 "state": "completed", 00:16:36.314 "digest": "sha256", 00:16:36.314 "dhgroup": "ffdhe3072" 00:16:36.314 } 00:16:36.314 } 00:16:36.314 ]' 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.314 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.573 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:36.573 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:37.144 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.144 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.405 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.666 00:16:37.666 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.666 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.666 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.926 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.926 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.926 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.926 13:13:19 
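
Every iteration ends with the same teardown so the next key/dhgroup combination starts from a clean slate; condensed from the detach/disconnect/remove calls repeated throughout this trace, in the order they occur:

    # Per-iteration teardown.
    hostrpc bdev_nvme_detach_controller nvme0       # drop the SPDK host-side controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # drop the kernel-initiator session
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
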
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.926 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.926 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.926 { 00:16:37.926 "cntlid": 23, 00:16:37.927 "qid": 0, 00:16:37.927 "state": "enabled", 00:16:37.927 "thread": "nvmf_tgt_poll_group_000", 00:16:37.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:37.927 "listen_address": { 00:16:37.927 "trtype": "TCP", 00:16:37.927 "adrfam": "IPv4", 00:16:37.927 "traddr": "10.0.0.2", 00:16:37.927 "trsvcid": "4420" 00:16:37.927 }, 00:16:37.927 "peer_address": { 00:16:37.927 "trtype": "TCP", 00:16:37.927 "adrfam": "IPv4", 00:16:37.927 "traddr": "10.0.0.1", 00:16:37.927 "trsvcid": "53584" 00:16:37.927 }, 00:16:37.927 "auth": { 00:16:37.927 "state": "completed", 00:16:37.927 "digest": "sha256", 00:16:37.927 "dhgroup": "ffdhe3072" 00:16:37.927 } 00:16:37.927 } 00:16:37.927 ]' 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.927 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:38.188 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.757 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.018 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.279 00:16:39.279 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.279 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.279 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.539 { 00:16:39.539 "cntlid": 25, 00:16:39.539 "qid": 0, 00:16:39.539 "state": "enabled", 00:16:39.539 "thread": "nvmf_tgt_poll_group_000", 00:16:39.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:39.539 "listen_address": { 00:16:39.539 "trtype": "TCP", 00:16:39.539 "adrfam": "IPv4", 00:16:39.539 "traddr": "10.0.0.2", 00:16:39.539 "trsvcid": "4420" 00:16:39.539 }, 00:16:39.539 "peer_address": { 00:16:39.539 "trtype": "TCP", 00:16:39.539 "adrfam": "IPv4", 00:16:39.539 "traddr": "10.0.0.1", 00:16:39.539 "trsvcid": "53624" 00:16:39.539 }, 00:16:39.539 "auth": { 00:16:39.539 "state": "completed", 00:16:39.539 "digest": "sha256", 00:16:39.539 "dhgroup": "ffdhe4096" 00:16:39.539 } 00:16:39.539 } 00:16:39.539 ]' 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.539 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.799 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:39.799 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
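The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect follow the NVMe in-band authentication secret representation, "DHHC-1:<hh>:<base64 secret+CRC>:", where <hh> records how the configured secret was transformed (00 = unaltered, 01/02/03 = retained secret hashed with SHA-256/-384/-512). nvme-cli can mint such strings; a sketch, hedged on the exact flag spellings in your nvme-cli version:

  # Generate a 48-byte DH-HMAC-CHAP secret in DHHC-1 representation;
  # --hmac 1 asks for a SHA-256-transformed key (DHHC-1:01:...)
  nvme gen-dhchap-key --key-length 48 --hmac 1 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6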
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.370 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.637 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.924 00:16:40.924 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.924 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.924 13:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.213 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.213 { 00:16:41.213 "cntlid": 27, 00:16:41.213 "qid": 0, 00:16:41.214 "state": "enabled", 00:16:41.214 "thread": "nvmf_tgt_poll_group_000", 00:16:41.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:41.214 "listen_address": { 00:16:41.214 "trtype": "TCP", 00:16:41.214 "adrfam": "IPv4", 00:16:41.214 "traddr": "10.0.0.2", 00:16:41.214 "trsvcid": "4420" 00:16:41.214 }, 00:16:41.214 "peer_address": { 00:16:41.214 "trtype": "TCP", 00:16:41.214 "adrfam": "IPv4", 00:16:41.214 "traddr": "10.0.0.1", 00:16:41.214 "trsvcid": "53642" 00:16:41.214 }, 00:16:41.214 "auth": { 00:16:41.214 "state": "completed", 00:16:41.214 "digest": "sha256", 00:16:41.214 "dhgroup": "ffdhe4096" 00:16:41.214 } 00:16:41.214 } 00:16:41.214 ]' 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.214 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.475 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:41.475 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.045 
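Note the asymmetry between iterations: keys 0-2 are registered with both --dhchap-key and --dhchap-ctrlr-key, so the controller must also authenticate back to the host (bidirectional DH-HMAC-CHAP), while key3 carries no controller key and exercises the unidirectional path — that is what the ${ckeys[$3]:+...} expansion in connect_authenticate selects on. The bidirectional pairing, condensed ($HOSTNQN standing in for the long uuid NQN in the log):

  # Target side: host must present key1, controller answers with ckey1
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attach with the matching pair
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1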
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.045 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.307 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.567 00:16:42.567 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.567 13:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.567 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.828 { 00:16:42.828 "cntlid": 29, 00:16:42.828 "qid": 0, 00:16:42.828 "state": "enabled", 00:16:42.828 "thread": "nvmf_tgt_poll_group_000", 00:16:42.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:42.828 "listen_address": { 00:16:42.828 "trtype": "TCP", 00:16:42.828 "adrfam": "IPv4", 00:16:42.828 "traddr": "10.0.0.2", 00:16:42.828 "trsvcid": "4420" 00:16:42.828 }, 00:16:42.828 "peer_address": { 00:16:42.828 "trtype": "TCP", 00:16:42.828 "adrfam": "IPv4", 00:16:42.828 "traddr": "10.0.0.1", 00:16:42.828 "trsvcid": "48040" 00:16:42.828 }, 00:16:42.828 "auth": { 00:16:42.828 "state": "completed", 00:16:42.828 "digest": "sha256", 00:16:42.828 "dhgroup": "ffdhe4096" 00:16:42.828 } 00:16:42.828 } 00:16:42.828 ]' 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.828 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.089 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:43.089 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret 
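Stripped of tracing, the driver for this whole stretch of the log is the nested loop at target/auth.sh lines 119-123 — one reconnect-and-verify cycle per (DH group, key id) pair, with the digest held at sha256 in this slice. A condensed paraphrase (the keys/ckeys names are registered with both applications earlier in the script, not shown here):

  # Condensed paraphrase of the loop visible in the trace above
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done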
DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.660 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.920 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:43.920 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.921 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.279 00:16:44.279 13:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.279 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.279 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.279 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.279 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.280 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.280 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.280 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.280 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.280 { 00:16:44.280 "cntlid": 31, 00:16:44.280 "qid": 0, 00:16:44.280 "state": "enabled", 00:16:44.280 "thread": "nvmf_tgt_poll_group_000", 00:16:44.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:44.280 "listen_address": { 00:16:44.280 "trtype": "TCP", 00:16:44.280 "adrfam": "IPv4", 00:16:44.280 "traddr": "10.0.0.2", 00:16:44.280 "trsvcid": "4420" 00:16:44.280 }, 00:16:44.280 "peer_address": { 00:16:44.280 "trtype": "TCP", 00:16:44.280 "adrfam": "IPv4", 00:16:44.280 "traddr": "10.0.0.1", 00:16:44.280 "trsvcid": "48080" 00:16:44.280 }, 00:16:44.280 "auth": { 00:16:44.280 "state": "completed", 00:16:44.280 "digest": "sha256", 00:16:44.280 "dhgroup": "ffdhe4096" 00:16:44.280 } 00:16:44.280 } 00:16:44.280 ]' 00:16:44.280 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:44.540 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.480 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.481 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.481 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.481 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.481 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
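Two SPDK processes are in play throughout: plain rpc_cmd calls land on the target application's default RPC socket, while hostrpc adds -s /var/tmp/host.sock to reach a second application acting as the NVMe-oF initiator. Launching such a pair might look like the sketch below — the binary names and socket paths are illustrative; only the -r (RPC listen address) convention is SPDK's:

  # Illustrative two-app topology behind this log
  "$SPDK_DIR/build/bin/nvmf_tgt" -r /var/tmp/spdk.sock &   # target side
  "$SPDK_DIR/build/bin/spdk_tgt" -r /var/tmp/host.sock &   # host/initiator side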
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.741 00:16:45.741 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.741 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.741 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.002 { 00:16:46.002 "cntlid": 33, 00:16:46.002 "qid": 0, 00:16:46.002 "state": "enabled", 00:16:46.002 "thread": "nvmf_tgt_poll_group_000", 00:16:46.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:46.002 "listen_address": { 00:16:46.002 "trtype": "TCP", 00:16:46.002 "adrfam": "IPv4", 00:16:46.002 "traddr": "10.0.0.2", 00:16:46.002 "trsvcid": "4420" 00:16:46.002 }, 00:16:46.002 "peer_address": { 00:16:46.002 "trtype": "TCP", 00:16:46.002 "adrfam": "IPv4", 00:16:46.002 "traddr": "10.0.0.1", 00:16:46.002 "trsvcid": "48108" 00:16:46.002 }, 00:16:46.002 "auth": { 00:16:46.002 "state": "completed", 00:16:46.002 "digest": "sha256", 00:16:46.002 "dhgroup": "ffdhe6144" 00:16:46.002 } 00:16:46.002 } 00:16:46.002 ]' 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.002 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.262 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.262 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.262 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.262 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.262 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.262 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:46.262 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.207 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.207 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:47.207 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.207 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.207 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.208 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.469 00:16:47.469 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.469 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.469 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.729 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.730 { 00:16:47.730 "cntlid": 35, 00:16:47.730 "qid": 0, 00:16:47.730 "state": "enabled", 00:16:47.730 "thread": "nvmf_tgt_poll_group_000", 00:16:47.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:47.730 "listen_address": { 00:16:47.730 "trtype": "TCP", 00:16:47.730 "adrfam": "IPv4", 00:16:47.730 "traddr": "10.0.0.2", 00:16:47.730 "trsvcid": "4420" 00:16:47.730 }, 00:16:47.730 "peer_address": { 00:16:47.730 "trtype": "TCP", 00:16:47.730 "adrfam": "IPv4", 00:16:47.730 "traddr": "10.0.0.1", 00:16:47.730 "trsvcid": "48122" 00:16:47.730 }, 00:16:47.730 "auth": { 00:16:47.730 "state": "completed", 00:16:47.730 "digest": "sha256", 00:16:47.730 "dhgroup": "ffdhe6144" 00:16:47.730 } 00:16:47.730 } 00:16:47.730 ]' 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.730 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.990 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.990 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.990 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.990 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.990 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
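After every attach, the script polls bdev_nvme_get_controllers on the host socket and string-compares the reported name against nvme0 — had authentication failed, the controller would never have materialized. As a one-liner:

  # Confirm the authenticated attach actually produced a controller
  [[ $("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]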
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.250 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:48.250 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.822 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.083 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.346 00:16:49.346 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.346 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.346 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.606 { 00:16:49.606 "cntlid": 37, 00:16:49.606 "qid": 0, 00:16:49.606 "state": "enabled", 00:16:49.606 "thread": "nvmf_tgt_poll_group_000", 00:16:49.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:49.606 "listen_address": { 00:16:49.606 "trtype": "TCP", 00:16:49.606 "adrfam": "IPv4", 00:16:49.606 "traddr": "10.0.0.2", 00:16:49.606 "trsvcid": "4420" 00:16:49.606 }, 00:16:49.606 "peer_address": { 00:16:49.606 "trtype": "TCP", 00:16:49.606 "adrfam": "IPv4", 00:16:49.606 "traddr": "10.0.0.1", 00:16:49.606 "trsvcid": "48142" 00:16:49.606 }, 00:16:49.606 "auth": { 00:16:49.606 "state": "completed", 00:16:49.606 "digest": "sha256", 00:16:49.606 "dhgroup": "ffdhe6144" 00:16:49.606 } 00:16:49.606 } 00:16:49.606 ]' 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:49.606 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.867 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:49.867 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.438 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.698 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.698 13:13:32 
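The recurring xtrace_disable / set +x / [[ 0 == 0 ]] triplets are autotest_common.sh plumbing, not test logic: rpc_cmd silences command tracing around the JSON-RPC round trip and then apparently checks the call's exit status, which would account for the "[[ 0 == 0 ]]" left in the trace by every successful RPC. The pattern, in outline (a simplification of the real helper):

  # Shape of every rpc_cmd call in this log (simplified sketch)
  xtrace_disable                        # set +x without cluttering the trace
  rpc_cmd nvmf_subsystem_add_host ...   # forward to rpc.py, capture output
  rc=$?
  xtrace_restore
  [[ $rc == 0 ]]                        # the check surfacing at @589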
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.699 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.699 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.699 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.960 00:16:50.960 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.960 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.960 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.221 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.221 { 00:16:51.221 "cntlid": 39, 00:16:51.221 "qid": 0, 00:16:51.221 "state": "enabled", 00:16:51.221 "thread": "nvmf_tgt_poll_group_000", 00:16:51.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:51.221 "listen_address": { 00:16:51.221 "trtype": "TCP", 00:16:51.221 "adrfam": "IPv4", 00:16:51.221 "traddr": "10.0.0.2", 00:16:51.221 "trsvcid": "4420" 00:16:51.221 }, 00:16:51.221 "peer_address": { 00:16:51.221 "trtype": "TCP", 00:16:51.221 "adrfam": "IPv4", 00:16:51.221 "traddr": "10.0.0.1", 00:16:51.221 "trsvcid": "48184" 00:16:51.221 }, 00:16:51.221 "auth": { 00:16:51.221 "state": "completed", 00:16:51.221 "digest": "sha256", 00:16:51.221 "dhgroup": "ffdhe6144" 00:16:51.221 } 00:16:51.221 } 00:16:51.221 ]' 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.221 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.481 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:51.481 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.481 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.481 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:51.481 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.422 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.423 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
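Each cycle also pushes the same handshake through the Linux kernel initiator via nvme-cli, so both SPDK's bdev_nvme path and the in-kernel nvme-tcp path are exercised against the target. In the connect lines above, -i 1 keeps it to a single I/O queue and -l 0 zeroes ctrl-loss-tmo so the follow-up disconnect returns immediately. Reduced to placeholders ($KEY/$CKEY are DHHC-1 strings like those in the log, $HOSTNQN/$HOSTID the uuid identifiers):

  # Kernel-initiator leg of the test (flags as used in the trace above)
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
       -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
       --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0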
00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.423 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.993 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.993 { 00:16:52.993 "cntlid": 41, 00:16:52.993 "qid": 0, 00:16:52.993 "state": "enabled", 00:16:52.993 "thread": "nvmf_tgt_poll_group_000", 00:16:52.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:52.993 "listen_address": { 00:16:52.993 "trtype": "TCP", 00:16:52.993 "adrfam": "IPv4", 00:16:52.993 "traddr": "10.0.0.2", 00:16:52.993 "trsvcid": "4420" 00:16:52.993 }, 00:16:52.993 "peer_address": { 00:16:52.993 "trtype": "TCP", 00:16:52.993 "adrfam": "IPv4", 00:16:52.993 "traddr": "10.0.0.1", 00:16:52.993 "trsvcid": "36406" 00:16:52.993 }, 00:16:52.993 "auth": { 00:16:52.993 "state": "completed", 00:16:52.993 "digest": "sha256", 00:16:52.993 "dhgroup": "ffdhe8192" 00:16:52.993 } 00:16:52.993 } 00:16:52.993 ]' 00:16:52.993 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.254 13:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.254 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.514 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:53.514 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.085 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.345 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.606 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.866 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.866 { 00:16:54.866 "cntlid": 43, 00:16:54.866 "qid": 0, 00:16:54.866 "state": "enabled", 00:16:54.866 "thread": "nvmf_tgt_poll_group_000", 00:16:54.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:54.867 "listen_address": { 00:16:54.867 "trtype": "TCP", 00:16:54.867 "adrfam": "IPv4", 00:16:54.867 "traddr": "10.0.0.2", 00:16:54.867 "trsvcid": "4420" 00:16:54.867 }, 00:16:54.867 "peer_address": { 00:16:54.867 "trtype": "TCP", 00:16:54.867 "adrfam": "IPv4", 00:16:54.867 "traddr": "10.0.0.1", 00:16:54.867 "trsvcid": "36430" 00:16:54.867 }, 00:16:54.867 "auth": { 00:16:54.867 "state": "completed", 00:16:54.867 "digest": "sha256", 00:16:54.867 "dhgroup": "ffdhe8192" 00:16:54.867 } 00:16:54.867 } 00:16:54.867 ]' 00:16:54.867 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.127 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.386 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:55.387 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.957 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.217 13:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.217 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.788 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.788 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.788 { 00:16:56.788 "cntlid": 45, 00:16:56.788 "qid": 0, 00:16:56.788 "state": "enabled", 00:16:56.788 "thread": "nvmf_tgt_poll_group_000", 00:16:56.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:56.788 "listen_address": { 00:16:56.788 "trtype": "TCP", 00:16:56.788 "adrfam": "IPv4", 00:16:56.788 "traddr": "10.0.0.2", 00:16:56.788 "trsvcid": "4420" 00:16:56.788 }, 00:16:56.788 "peer_address": { 00:16:56.788 "trtype": "TCP", 00:16:56.788 "adrfam": "IPv4", 00:16:56.788 "traddr": "10.0.0.1", 00:16:56.788 "trsvcid": "36454" 00:16:56.788 }, 00:16:56.788 "auth": { 00:16:56.788 "state": "completed", 00:16:56.788 "digest": "sha256", 00:16:56.788 "dhgroup": "ffdhe8192" 00:16:56.788 } 00:16:56.788 } 00:16:56.788 ]' 00:16:56.788 
13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.048 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.309 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:57.309 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.880 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.141 13:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.141 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.712 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.712 { 00:16:58.712 "cntlid": 47, 00:16:58.712 "qid": 0, 00:16:58.712 "state": "enabled", 00:16:58.712 "thread": "nvmf_tgt_poll_group_000", 00:16:58.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:58.712 "listen_address": { 00:16:58.712 "trtype": "TCP", 00:16:58.712 "adrfam": "IPv4", 00:16:58.712 "traddr": "10.0.0.2", 00:16:58.712 "trsvcid": "4420" 00:16:58.712 }, 00:16:58.712 "peer_address": { 00:16:58.712 "trtype": "TCP", 00:16:58.712 "adrfam": "IPv4", 00:16:58.712 "traddr": "10.0.0.1", 00:16:58.712 "trsvcid": "36474" 00:16:58.712 }, 00:16:58.712 "auth": { 00:16:58.712 "state": "completed", 00:16:58.712 
"digest": "sha256", 00:16:58.712 "dhgroup": "ffdhe8192" 00:16:58.712 } 00:16:58.712 } 00:16:58.712 ]' 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.712 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:58.972 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:59.912 13:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.912 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.913 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.173 00:17:00.173 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.173 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.173 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.433 { 00:17:00.433 "cntlid": 49, 00:17:00.433 "qid": 0, 00:17:00.433 "state": "enabled", 00:17:00.433 "thread": "nvmf_tgt_poll_group_000", 00:17:00.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:00.433 "listen_address": { 00:17:00.433 "trtype": "TCP", 00:17:00.433 "adrfam": "IPv4", 
00:17:00.433 "traddr": "10.0.0.2", 00:17:00.433 "trsvcid": "4420" 00:17:00.433 }, 00:17:00.433 "peer_address": { 00:17:00.433 "trtype": "TCP", 00:17:00.433 "adrfam": "IPv4", 00:17:00.433 "traddr": "10.0.0.1", 00:17:00.433 "trsvcid": "36496" 00:17:00.433 }, 00:17:00.433 "auth": { 00:17:00.433 "state": "completed", 00:17:00.433 "digest": "sha384", 00:17:00.433 "dhgroup": "null" 00:17:00.433 } 00:17:00.433 } 00:17:00.433 ]' 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:00.433 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.434 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.434 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.434 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.694 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:00.694 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.265 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.530 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.531 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.790 00:17:01.790 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.790 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.790 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.051 { 00:17:02.051 "cntlid": 51, 00:17:02.051 "qid": 0, 00:17:02.051 "state": "enabled", 
00:17:02.051 "thread": "nvmf_tgt_poll_group_000", 00:17:02.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:02.051 "listen_address": { 00:17:02.051 "trtype": "TCP", 00:17:02.051 "adrfam": "IPv4", 00:17:02.051 "traddr": "10.0.0.2", 00:17:02.051 "trsvcid": "4420" 00:17:02.051 }, 00:17:02.051 "peer_address": { 00:17:02.051 "trtype": "TCP", 00:17:02.051 "adrfam": "IPv4", 00:17:02.051 "traddr": "10.0.0.1", 00:17:02.051 "trsvcid": "49472" 00:17:02.051 }, 00:17:02.051 "auth": { 00:17:02.051 "state": "completed", 00:17:02.051 "digest": "sha384", 00:17:02.051 "dhgroup": "null" 00:17:02.051 } 00:17:02.051 } 00:17:02.051 ]' 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.051 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.311 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:02.311 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:02.881 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.142 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.402 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.402 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.662 13:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.662 { 00:17:03.662 "cntlid": 53, 00:17:03.662 "qid": 0, 00:17:03.662 "state": "enabled", 00:17:03.662 "thread": "nvmf_tgt_poll_group_000", 00:17:03.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:03.662 "listen_address": { 00:17:03.662 "trtype": "TCP", 00:17:03.662 "adrfam": "IPv4", 00:17:03.662 "traddr": "10.0.0.2", 00:17:03.662 "trsvcid": "4420" 00:17:03.662 }, 00:17:03.662 "peer_address": { 00:17:03.662 "trtype": "TCP", 00:17:03.662 "adrfam": "IPv4", 00:17:03.662 "traddr": "10.0.0.1", 00:17:03.662 "trsvcid": "49506" 00:17:03.662 }, 00:17:03.662 "auth": { 00:17:03.662 "state": "completed", 00:17:03.662 "digest": "sha384", 00:17:03.662 "dhgroup": "null" 00:17:03.662 } 00:17:03.662 } 00:17:03.662 ]' 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.662 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.922 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:03.922 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.493 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:04.753 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.754 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.014 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.014 { 00:17:05.014 "cntlid": 55, 00:17:05.014 "qid": 0, 00:17:05.014 "state": "enabled", 00:17:05.014 "thread": "nvmf_tgt_poll_group_000", 00:17:05.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:05.014 "listen_address": { 00:17:05.014 "trtype": "TCP", 00:17:05.014 "adrfam": "IPv4", 00:17:05.014 "traddr": "10.0.0.2", 00:17:05.014 "trsvcid": "4420" 00:17:05.014 }, 00:17:05.014 "peer_address": { 00:17:05.014 "trtype": "TCP", 00:17:05.014 "adrfam": "IPv4", 00:17:05.014 "traddr": "10.0.0.1", 00:17:05.014 "trsvcid": "49532" 00:17:05.014 }, 00:17:05.014 "auth": { 00:17:05.014 "state": "completed", 00:17:05.014 "digest": "sha384", 00:17:05.014 "dhgroup": "null" 00:17:05.014 } 00:17:05.014 } 00:17:05.014 ]' 00:17:05.014 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.274 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.274 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.274 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.274 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.274 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.274 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.274 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.535 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:05.535 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.105 13:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.105 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.367 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.627 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.627 { 00:17:06.627 "cntlid": 57, 00:17:06.627 "qid": 0, 00:17:06.627 "state": "enabled", 00:17:06.627 "thread": "nvmf_tgt_poll_group_000", 00:17:06.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:06.627 "listen_address": { 00:17:06.627 "trtype": "TCP", 00:17:06.627 "adrfam": "IPv4", 00:17:06.627 "traddr": "10.0.0.2", 00:17:06.627 "trsvcid": "4420" 00:17:06.627 }, 00:17:06.627 "peer_address": { 00:17:06.627 "trtype": "TCP", 00:17:06.627 "adrfam": "IPv4", 00:17:06.627 "traddr": "10.0.0.1", 00:17:06.627 "trsvcid": "49554" 00:17:06.627 }, 00:17:06.627 "auth": { 00:17:06.627 "state": "completed", 00:17:06.627 "digest": "sha384", 00:17:06.627 "dhgroup": "ffdhe2048" 00:17:06.627 } 00:17:06.627 } 00:17:06.627 ]' 00:17:06.627 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.888 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.149 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:07.149 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.719 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.978 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:07.978 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.979 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.239 00:17:08.239 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.239 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.239 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.239 { 00:17:08.239 "cntlid": 59, 00:17:08.239 "qid": 0, 00:17:08.239 "state": "enabled", 00:17:08.239 "thread": "nvmf_tgt_poll_group_000", 00:17:08.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:08.239 "listen_address": { 00:17:08.239 "trtype": "TCP", 00:17:08.239 "adrfam": "IPv4", 00:17:08.239 "traddr": "10.0.0.2", 00:17:08.239 "trsvcid": "4420" 00:17:08.239 }, 00:17:08.239 "peer_address": { 00:17:08.239 "trtype": "TCP", 00:17:08.239 "adrfam": "IPv4", 00:17:08.239 "traddr": "10.0.0.1", 00:17:08.239 "trsvcid": "49576" 00:17:08.239 }, 00:17:08.239 "auth": { 00:17:08.239 "state": "completed", 00:17:08.239 "digest": "sha384", 00:17:08.239 "dhgroup": "ffdhe2048" 00:17:08.239 } 00:17:08.239 } 00:17:08.239 ]' 00:17:08.239 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.499 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.760 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:08.760 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.329 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.589 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:09.589 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.589 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.589 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.590 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.850 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.850 { 00:17:09.850 "cntlid": 61, 00:17:09.850 "qid": 0, 00:17:09.850 "state": "enabled", 00:17:09.850 "thread": "nvmf_tgt_poll_group_000", 00:17:09.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:09.850 "listen_address": { 00:17:09.850 "trtype": "TCP", 00:17:09.850 "adrfam": "IPv4", 00:17:09.850 "traddr": "10.0.0.2", 00:17:09.850 "trsvcid": "4420" 00:17:09.850 }, 00:17:09.850 "peer_address": { 00:17:09.850 "trtype": "TCP", 00:17:09.850 "adrfam": "IPv4", 00:17:09.850 "traddr": "10.0.0.1", 00:17:09.850 "trsvcid": "49610" 00:17:09.850 }, 00:17:09.850 "auth": { 00:17:09.850 "state": "completed", 00:17:09.850 "digest": "sha384", 00:17:09.850 "dhgroup": "ffdhe2048" 00:17:09.850 } 00:17:09.850 } 00:17:09.850 ]' 00:17:09.850 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.111 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.371 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:10.371 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.942 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.202 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.462 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.462 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.462 { 00:17:11.462 "cntlid": 63, 00:17:11.462 "qid": 0, 00:17:11.462 "state": "enabled", 00:17:11.462 "thread": "nvmf_tgt_poll_group_000", 00:17:11.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:11.462 "listen_address": { 00:17:11.462 "trtype": "TCP", 00:17:11.462 "adrfam": "IPv4", 00:17:11.462 "traddr": "10.0.0.2", 00:17:11.462 "trsvcid": "4420" 00:17:11.462 }, 00:17:11.462 "peer_address": { 00:17:11.462 "trtype": "TCP", 00:17:11.462 "adrfam": "IPv4", 00:17:11.462 "traddr": "10.0.0.1", 00:17:11.462 "trsvcid": "60552" 00:17:11.462 }, 00:17:11.462 "auth": { 00:17:11.462 "state": "completed", 00:17:11.462 "digest": "sha384", 00:17:11.462 "dhgroup": "ffdhe2048" 00:17:11.462 } 00:17:11.463 } 00:17:11.463 ]' 00:17:11.463 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.463 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.723 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.983 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:11.983 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:12.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.553 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.814 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.814 
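[Every pass in this trace is produced by the same nested loops; the xtrace markers target/auth.sh@119-121 above show the loop heads directly. The following is a minimal sketch of that driver reconstructed from those markers, not the script itself: the dhgroups array contents and the $rootdir variable are assumptions, the keys array comes from the script's setup, and this excerpt only exercises sha384 with null, ffdhe2048, ffdhe3072 and ffdhe4096.

    # hostrpc forwards an RPC to the host-side SPDK app over its UNIX domain
    # socket, matching the target/auth.sh@31 lines in the trace
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096")  # assumed contents
    for dhgroup in "${dhgroups[@]}"; do                    # target/auth.sh@119
        for keyid in "${!keys[@]}"; do                     # target/auth.sh@120
            # permit exactly one digest/dhgroup pair on the host, then prove
            # a controller can still be attached using key $keyid
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
]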
00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.075 { 00:17:13.075 "cntlid": 65, 00:17:13.075 "qid": 0, 00:17:13.075 "state": "enabled", 00:17:13.075 "thread": "nvmf_tgt_poll_group_000", 00:17:13.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:13.075 "listen_address": { 00:17:13.075 "trtype": "TCP", 00:17:13.075 "adrfam": "IPv4", 00:17:13.075 "traddr": "10.0.0.2", 00:17:13.075 "trsvcid": "4420" 00:17:13.075 }, 00:17:13.075 "peer_address": { 00:17:13.075 "trtype": "TCP", 00:17:13.075 "adrfam": "IPv4", 00:17:13.075 "traddr": "10.0.0.1", 00:17:13.075 "trsvcid": "60586" 00:17:13.075 }, 00:17:13.075 "auth": { 00:17:13.075 "state": "completed", 00:17:13.075 "digest": "sha384", 00:17:13.075 "dhgroup": "ffdhe3072" 00:17:13.075 } 00:17:13.075 } 00:17:13.075 ]' 00:17:13.075 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.335 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.335 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.335 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.335 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.335 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.335 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.335 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.597 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:13.597 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:14.168 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.169 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.429 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.429 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.690 { 00:17:14.690 "cntlid": 67, 00:17:14.690 "qid": 0, 00:17:14.690 "state": "enabled", 00:17:14.690 "thread": "nvmf_tgt_poll_group_000", 00:17:14.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:14.690 "listen_address": { 00:17:14.690 "trtype": "TCP", 00:17:14.690 "adrfam": "IPv4", 00:17:14.690 "traddr": "10.0.0.2", 00:17:14.690 "trsvcid": "4420" 00:17:14.690 }, 00:17:14.690 "peer_address": { 00:17:14.690 "trtype": "TCP", 00:17:14.690 "adrfam": "IPv4", 00:17:14.690 "traddr": "10.0.0.1", 00:17:14.690 "trsvcid": "60620" 00:17:14.690 }, 00:17:14.690 "auth": { 00:17:14.690 "state": "completed", 00:17:14.690 "digest": "sha384", 00:17:14.690 "dhgroup": "ffdhe3072" 00:17:14.690 } 00:17:14.690 } 00:17:14.690 ]' 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.690 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret 
DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:14.951 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.893 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.154 00:17:16.154 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.154 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.154 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.415 { 00:17:16.415 "cntlid": 69, 00:17:16.415 "qid": 0, 00:17:16.415 "state": "enabled", 00:17:16.415 "thread": "nvmf_tgt_poll_group_000", 00:17:16.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:16.415 "listen_address": { 00:17:16.415 "trtype": "TCP", 00:17:16.415 "adrfam": "IPv4", 00:17:16.415 "traddr": "10.0.0.2", 00:17:16.415 "trsvcid": "4420" 00:17:16.415 }, 00:17:16.415 "peer_address": { 00:17:16.415 "trtype": "TCP", 00:17:16.415 "adrfam": "IPv4", 00:17:16.415 "traddr": "10.0.0.1", 00:17:16.415 "trsvcid": "60644" 00:17:16.415 }, 00:17:16.415 "auth": { 00:17:16.415 "state": "completed", 00:17:16.415 "digest": "sha384", 00:17:16.415 "dhgroup": "ffdhe3072" 00:17:16.415 } 00:17:16.415 } 00:17:16.415 ]' 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.415 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:16.676 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:16.676 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.247 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
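[The records from target/auth.sh@65 onward are the body of connect_authenticate. Sketched below from the trace; argument handling and the qpairs plumbing are inferred, so treat this as an outline under those assumptions rather than the real function:

    connect_authenticate() {                        # target/auth.sh@65-78
        local digest=$1 dhgroup=$2 keyid=$3 qpairs
        # key3 has no matching ckey3, so the ${ckeys[...]:+...} expansion at
        # @68 drops --dhchap-ctrlr-key for it, as the key3 passes above show
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" \
            ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        bdev_connect -b nvme0 --dhchap-key "key$keyid" \
            ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # the controller must exist, and its qpair must report the digest and
        # dhgroup we configured plus an auth state of "completed" (@73-77)
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
        hostrpc bdev_nvme_detach_controller nvme0   # @78
    }
]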
00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.508 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.768 00:17:17.768 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.768 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.768 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.027 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.027 { 00:17:18.027 "cntlid": 71, 00:17:18.027 "qid": 0, 00:17:18.027 "state": "enabled", 00:17:18.027 "thread": "nvmf_tgt_poll_group_000", 00:17:18.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:18.027 "listen_address": { 00:17:18.027 "trtype": "TCP", 00:17:18.027 "adrfam": "IPv4", 00:17:18.027 "traddr": "10.0.0.2", 00:17:18.027 "trsvcid": "4420" 00:17:18.027 }, 00:17:18.027 "peer_address": { 00:17:18.027 "trtype": "TCP", 00:17:18.027 "adrfam": "IPv4", 00:17:18.027 "traddr": "10.0.0.1", 00:17:18.027 "trsvcid": "60678" 00:17:18.027 }, 00:17:18.027 "auth": { 00:17:18.027 "state": "completed", 00:17:18.027 "digest": "sha384", 00:17:18.028 "dhgroup": "ffdhe3072" 00:17:18.028 } 00:17:18.028 } 00:17:18.028 ]' 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.028 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.287 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:18.287 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.859 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
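[Each cycle also replays the authentication through the Linux kernel initiator, via the nvme connect/disconnect pairs at target/auth.sh@36 and @82. The secrets appear in the TP 8006 wire representation, where the leading DHHC-1:<id>: field identifies the hash with which the retained secret was transformed (00 for an untransformed secret; 01, 02, 03 for SHA-256, SHA-384, SHA-512). A sketch of that leg, with the host identity and secret bodies as placeholders taken only from the trace shape:

    # kernel-initiator leg of one cycle (cf. @36/@82 in the trace above);
    # hostnqn/hostid and the secret payloads are placeholders here
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:00:..." \
        --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
]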
00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.119 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.380 00:17:19.380 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.380 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.380 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.640 { 00:17:19.640 "cntlid": 73, 00:17:19.640 "qid": 0, 00:17:19.640 "state": "enabled", 00:17:19.640 "thread": "nvmf_tgt_poll_group_000", 00:17:19.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:19.640 "listen_address": { 00:17:19.640 "trtype": "TCP", 00:17:19.640 "adrfam": "IPv4", 00:17:19.640 "traddr": "10.0.0.2", 00:17:19.640 "trsvcid": "4420" 00:17:19.640 }, 00:17:19.640 "peer_address": { 00:17:19.640 "trtype": "TCP", 00:17:19.640 "adrfam": "IPv4", 00:17:19.640 "traddr": "10.0.0.1", 00:17:19.640 "trsvcid": "60710" 00:17:19.640 }, 00:17:19.640 "auth": { 00:17:19.640 "state": "completed", 00:17:19.640 "digest": "sha384", 00:17:19.640 "dhgroup": "ffdhe4096" 00:17:19.640 } 00:17:19.640 } 00:17:19.640 ]' 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.640 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.641 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.641 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.641 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.641 
13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.641 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.901 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:19.901 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:20.471 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.731 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
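Worth noting in the nvme connect invocation above is the shape of the secrets: nvme-cli and SPDK exchange DH-HMAC-CHAP keys in the NVMe secret representation DHHC-1:<t>:<base64 key material>:, where <t> identifies the secret transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the four generated keys in this log carry prefixes DHHC-1:00: through DHHC-1:03:. A trimmed sketch of the kernel-initiator step, with the literal secrets replaced by hypothetical placeholder variables:
  # kernel initiator: prove the same credentials using the raw DHHC-1 secrets
  # ($key0_secret/$ckey0_secret stand in for the literal strings in the trace)
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 \
      --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
  nvme disconnect -n "$subnqn"    # expect: "... disconnected 1 controller(s)"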
common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.732 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.992 00:17:20.992 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.992 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.992 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.253 { 00:17:21.253 "cntlid": 75, 00:17:21.253 "qid": 0, 00:17:21.253 "state": "enabled", 00:17:21.253 "thread": "nvmf_tgt_poll_group_000", 00:17:21.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:21.253 "listen_address": { 00:17:21.253 "trtype": "TCP", 00:17:21.253 "adrfam": "IPv4", 00:17:21.253 "traddr": "10.0.0.2", 00:17:21.253 "trsvcid": "4420" 00:17:21.253 }, 00:17:21.253 "peer_address": { 00:17:21.253 "trtype": "TCP", 00:17:21.253 "adrfam": "IPv4", 00:17:21.253 "traddr": "10.0.0.1", 00:17:21.253 "trsvcid": "44002" 00:17:21.253 }, 00:17:21.253 "auth": { 00:17:21.253 "state": "completed", 00:17:21.253 "digest": "sha384", 00:17:21.253 "dhgroup": "ffdhe4096" 00:17:21.253 } 00:17:21.253 } 00:17:21.253 ]' 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:21.253 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.514 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.514 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.514 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.514 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:21.514 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:22.455 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.455 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.715 00:17:22.715 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.715 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.715 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.975 { 00:17:22.975 "cntlid": 77, 00:17:22.975 "qid": 0, 00:17:22.975 "state": "enabled", 00:17:22.975 "thread": "nvmf_tgt_poll_group_000", 00:17:22.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:22.975 "listen_address": { 00:17:22.975 "trtype": "TCP", 00:17:22.975 "adrfam": "IPv4", 00:17:22.975 "traddr": "10.0.0.2", 00:17:22.975 "trsvcid": "4420" 00:17:22.975 }, 00:17:22.975 "peer_address": { 00:17:22.975 "trtype": "TCP", 00:17:22.975 "adrfam": "IPv4", 00:17:22.975 "traddr": "10.0.0.1", 00:17:22.975 "trsvcid": "44036" 00:17:22.975 }, 00:17:22.975 "auth": { 00:17:22.975 "state": "completed", 00:17:22.975 "digest": "sha384", 00:17:22.975 "dhgroup": "ffdhe4096" 00:17:22.975 } 00:17:22.975 } 00:17:22.975 ]' 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.975 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.975 13:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.976 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.976 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.976 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.976 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.976 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.236 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:23.236 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.806 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.067 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.328 00:17:24.328 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.328 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.328 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.589 { 00:17:24.589 "cntlid": 79, 00:17:24.589 "qid": 0, 00:17:24.589 "state": "enabled", 00:17:24.589 "thread": "nvmf_tgt_poll_group_000", 00:17:24.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:24.589 "listen_address": { 00:17:24.589 "trtype": "TCP", 00:17:24.589 "adrfam": "IPv4", 00:17:24.589 "traddr": "10.0.0.2", 00:17:24.589 "trsvcid": "4420" 00:17:24.589 }, 00:17:24.589 "peer_address": { 00:17:24.589 "trtype": "TCP", 00:17:24.589 "adrfam": "IPv4", 00:17:24.589 "traddr": "10.0.0.1", 00:17:24.589 "trsvcid": "44050" 00:17:24.589 }, 00:17:24.589 "auth": { 00:17:24.589 "state": "completed", 00:17:24.589 "digest": "sha384", 00:17:24.589 "dhgroup": "ffdhe4096" 00:17:24.589 } 00:17:24.589 } 00:17:24.589 ]' 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.589 13:14:06 
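The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace explains why this key3 iteration registers the host with --dhchap-key key3 alone: bash's ${var:+word} yields word only when var is set and non-empty, and the test's ckeys array (populated earlier in auth.sh; its exact contents are an assumption here) has no controller key at index 3. A minimal illustration of the idiom:
  # assumed shape of the array; only the empty slot at index 3 matters here
  ckeys=(ckey0 ckey1 ckey2 "")
  keyid=3                                   # stands in for the function's $3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"                        # 0 -> no controller-key flags appended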
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.589 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.851 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:24.851 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:25.423 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.423 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:25.423 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.423 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:25.684 13:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.684 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.945 00:17:26.206 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.206 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.206 13:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.206 { 00:17:26.206 "cntlid": 81, 00:17:26.206 "qid": 0, 00:17:26.206 "state": "enabled", 00:17:26.206 "thread": "nvmf_tgt_poll_group_000", 00:17:26.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:26.206 "listen_address": { 00:17:26.206 "trtype": "TCP", 00:17:26.206 "adrfam": "IPv4", 00:17:26.206 "traddr": "10.0.0.2", 00:17:26.206 "trsvcid": "4420" 00:17:26.206 }, 00:17:26.206 "peer_address": { 00:17:26.206 "trtype": "TCP", 00:17:26.206 "adrfam": "IPv4", 00:17:26.206 "traddr": "10.0.0.1", 00:17:26.206 "trsvcid": "44064" 00:17:26.206 }, 00:17:26.206 "auth": { 00:17:26.206 "state": "completed", 00:17:26.206 "digest": 
"sha384", 00:17:26.206 "dhgroup": "ffdhe6144" 00:17:26.206 } 00:17:26.206 } 00:17:26.206 ]' 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.206 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:26.468 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:27.410 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.410 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.671 00:17:27.671 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.671 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.671 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.931 { 00:17:27.931 "cntlid": 83, 00:17:27.931 "qid": 0, 00:17:27.931 "state": "enabled", 00:17:27.931 "thread": "nvmf_tgt_poll_group_000", 00:17:27.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:27.931 "listen_address": { 00:17:27.931 "trtype": "TCP", 00:17:27.931 "adrfam": "IPv4", 00:17:27.931 "traddr": "10.0.0.2", 00:17:27.931 
"trsvcid": "4420" 00:17:27.931 }, 00:17:27.931 "peer_address": { 00:17:27.931 "trtype": "TCP", 00:17:27.931 "adrfam": "IPv4", 00:17:27.931 "traddr": "10.0.0.1", 00:17:27.931 "trsvcid": "44096" 00:17:27.931 }, 00:17:27.931 "auth": { 00:17:27.931 "state": "completed", 00:17:27.931 "digest": "sha384", 00:17:27.931 "dhgroup": "ffdhe6144" 00:17:27.931 } 00:17:27.931 } 00:17:27.931 ]' 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.931 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.191 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.191 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.191 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.191 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:28.192 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.132 
13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.132 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.393 00:17:29.393 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.393 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.393 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.653 { 00:17:29.653 "cntlid": 85, 00:17:29.653 "qid": 0, 00:17:29.653 "state": "enabled", 00:17:29.653 "thread": "nvmf_tgt_poll_group_000", 00:17:29.653 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:29.653 "listen_address": { 00:17:29.653 "trtype": "TCP", 00:17:29.653 "adrfam": "IPv4", 00:17:29.653 "traddr": "10.0.0.2", 00:17:29.653 "trsvcid": "4420" 00:17:29.653 }, 00:17:29.653 "peer_address": { 00:17:29.653 "trtype": "TCP", 00:17:29.653 "adrfam": "IPv4", 00:17:29.653 "traddr": "10.0.0.1", 00:17:29.653 "trsvcid": "44122" 00:17:29.653 }, 00:17:29.653 "auth": { 00:17:29.653 "state": "completed", 00:17:29.653 "digest": "sha384", 00:17:29.653 "dhgroup": "ffdhe6144" 00:17:29.653 } 00:17:29.653 } 00:17:29.653 ]' 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.653 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.914 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.914 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.914 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.914 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:29.914 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.857 13:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:30.857 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.858 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.118 00:17:31.118 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.118 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.118 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.380 { 00:17:31.380 "cntlid": 87, 
00:17:31.380 "qid": 0, 00:17:31.380 "state": "enabled", 00:17:31.380 "thread": "nvmf_tgt_poll_group_000", 00:17:31.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:31.380 "listen_address": { 00:17:31.380 "trtype": "TCP", 00:17:31.380 "adrfam": "IPv4", 00:17:31.380 "traddr": "10.0.0.2", 00:17:31.380 "trsvcid": "4420" 00:17:31.380 }, 00:17:31.380 "peer_address": { 00:17:31.380 "trtype": "TCP", 00:17:31.380 "adrfam": "IPv4", 00:17:31.380 "traddr": "10.0.0.1", 00:17:31.380 "trsvcid": "45728" 00:17:31.380 }, 00:17:31.380 "auth": { 00:17:31.380 "state": "completed", 00:17:31.380 "digest": "sha384", 00:17:31.380 "dhgroup": "ffdhe6144" 00:17:31.380 } 00:17:31.380 } 00:17:31.380 ]' 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.380 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:31.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:32.212 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.473 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.474 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.474 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.474 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.045 00:17:33.045 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.045 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.045 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
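The target/auth.sh@119 and @120 xtrace markers above give away the sweep structure: the digest is fixed at sha384 for this stretch of the log, and the script walks every DH group through all four key indices — ffdhe4096 and ffdhe6144 are done, and the ffdhe8192 pass starts here. A reconstruction of the loop's shape as inferred from those markers (not the verbatim script; keys is the key array set up earlier in auth.sh):
  dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
      for keyid in "${!keys[@]}"; do         # auth.sh@120, indices 0..3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate sha384 "$dhgroup" "$keyid"            # @123
      done
  done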
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.306 { 00:17:33.306 "cntlid": 89, 00:17:33.306 "qid": 0, 00:17:33.306 "state": "enabled", 00:17:33.306 "thread": "nvmf_tgt_poll_group_000", 00:17:33.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:33.306 "listen_address": { 00:17:33.306 "trtype": "TCP", 00:17:33.306 "adrfam": "IPv4", 00:17:33.306 "traddr": "10.0.0.2", 00:17:33.306 "trsvcid": "4420" 00:17:33.306 }, 00:17:33.306 "peer_address": { 00:17:33.306 "trtype": "TCP", 00:17:33.306 "adrfam": "IPv4", 00:17:33.306 "traddr": "10.0.0.1", 00:17:33.306 "trsvcid": "45754" 00:17:33.306 }, 00:17:33.306 "auth": { 00:17:33.306 "state": "completed", 00:17:33.306 "digest": "sha384", 00:17:33.306 "dhgroup": "ffdhe8192" 00:17:33.306 } 00:17:33.306 } 00:17:33.306 ]' 00:17:33.306 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.306 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.568 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:33.568 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.139 13:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.139 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.400 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.971 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.971 { 00:17:34.971 "cntlid": 91, 00:17:34.971 "qid": 0, 00:17:34.971 "state": "enabled", 00:17:34.971 "thread": "nvmf_tgt_poll_group_000", 00:17:34.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:34.971 "listen_address": { 00:17:34.971 "trtype": "TCP", 00:17:34.971 "adrfam": "IPv4", 00:17:34.971 "traddr": "10.0.0.2", 00:17:34.971 "trsvcid": "4420" 00:17:34.971 }, 00:17:34.971 "peer_address": { 00:17:34.971 "trtype": "TCP", 00:17:34.971 "adrfam": "IPv4", 00:17:34.971 "traddr": "10.0.0.1", 00:17:34.971 "trsvcid": "45780" 00:17:34.971 }, 00:17:34.971 "auth": { 00:17:34.971 "state": "completed", 00:17:34.971 "digest": "sha384", 00:17:34.971 "dhgroup": "ffdhe8192" 00:17:34.971 } 00:17:34.971 } 00:17:34.971 ]' 00:17:34.971 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.232 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.493 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:35.493 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:36.064 13:14:17 
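The pass/fail criterion for each iteration is the jq inspection of the nvmf_subsystem_get_qpairs output shown above: the qpair's auth object must report the digest and dhgroup that were configured, with an authentication state of "completed". A minimal sketch of that check, using the same jq filters as the logged auth.sh@75-77 lines (rpc.py abbreviated as before):

qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# All three fields must match what bdev_nvme_set_options allowed this round.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]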
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.064 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.325 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.896 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.896 13:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.896 { 00:17:36.896 "cntlid": 93, 00:17:36.896 "qid": 0, 00:17:36.896 "state": "enabled", 00:17:36.896 "thread": "nvmf_tgt_poll_group_000", 00:17:36.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:36.896 "listen_address": { 00:17:36.896 "trtype": "TCP", 00:17:36.896 "adrfam": "IPv4", 00:17:36.896 "traddr": "10.0.0.2", 00:17:36.896 "trsvcid": "4420" 00:17:36.896 }, 00:17:36.896 "peer_address": { 00:17:36.896 "trtype": "TCP", 00:17:36.896 "adrfam": "IPv4", 00:17:36.896 "traddr": "10.0.0.1", 00:17:36.896 "trsvcid": "45814" 00:17:36.896 }, 00:17:36.896 "auth": { 00:17:36.896 "state": "completed", 00:17:36.896 "digest": "sha384", 00:17:36.896 "dhgroup": "ffdhe8192" 00:17:36.896 } 00:17:36.896 } 00:17:36.896 ]' 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.896 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.157 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.157 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.157 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.157 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.157 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.157 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:37.157 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.099 13:14:19 
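Each iteration also exercises the kernel initiator through nvme-cli with the same keys in their DHHC-1 textual form. As background (not something this log itself states): the two-digit field after the DHHC-1: prefix identifies the hash used to transform the secret, 00 meaning an unhashed key and 01/02/03 SHA-256/-384/-512, as produced by nvme gen-dhchap-key. Condensed from the logged commands, with the base64 secret bodies elided and $hostid standing for the 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 UUID:

# Kernel-initiator leg of the same authentication test.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:02:<base64>:" \
    --dhchap-ctrl-secret "DHHC-1:01:<base64>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0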
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.099 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.671 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.671 { 00:17:38.671 "cntlid": 95, 00:17:38.671 "qid": 0, 00:17:38.671 "state": "enabled", 00:17:38.671 "thread": "nvmf_tgt_poll_group_000", 00:17:38.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:38.671 "listen_address": { 00:17:38.671 "trtype": "TCP", 00:17:38.671 "adrfam": "IPv4", 00:17:38.671 "traddr": "10.0.0.2", 00:17:38.671 "trsvcid": "4420" 00:17:38.671 }, 00:17:38.671 "peer_address": { 00:17:38.671 "trtype": "TCP", 00:17:38.671 "adrfam": "IPv4", 00:17:38.671 "traddr": "10.0.0.1", 00:17:38.671 "trsvcid": "45830" 00:17:38.671 }, 00:17:38.671 "auth": { 00:17:38.671 "state": "completed", 00:17:38.671 "digest": "sha384", 00:17:38.671 "dhgroup": "ffdhe8192" 00:17:38.671 } 00:17:38.671 } 00:17:38.671 ]' 00:17:38.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.932 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.932 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.933 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.933 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.933 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.933 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.933 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.193 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:39.193 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.763 13:14:21 
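Note the asymmetry in the key3 iterations above: nvmf_subsystem_add_host is called with --dhchap-key key3 only, whereas key0 through key2 also pass --dhchap-ctrlr-key. That follows from the conditional array expansion at target/auth.sh@68, where an empty ckeys entry makes the :+ expansion produce no words at all. A sketch of the mechanism ($3 is connect_authenticate's key-index argument, as in the logged script; $subnqn and $hostnqn are assumed variable names, the log shows the literal NQNs):

# When ckeys[$3] is empty or unset, ckey becomes an empty array and the
# controller-key option is omitted from the add_host call entirely.
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"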
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.763 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.764 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.024 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.025 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.025 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.286 00:17:40.286 
13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.286 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.286 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.286 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.286 { 00:17:40.286 "cntlid": 97, 00:17:40.286 "qid": 0, 00:17:40.286 "state": "enabled", 00:17:40.286 "thread": "nvmf_tgt_poll_group_000", 00:17:40.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:40.286 "listen_address": { 00:17:40.286 "trtype": "TCP", 00:17:40.286 "adrfam": "IPv4", 00:17:40.286 "traddr": "10.0.0.2", 00:17:40.286 "trsvcid": "4420" 00:17:40.286 }, 00:17:40.286 "peer_address": { 00:17:40.286 "trtype": "TCP", 00:17:40.286 "adrfam": "IPv4", 00:17:40.286 "traddr": "10.0.0.1", 00:17:40.286 "trsvcid": "45840" 00:17:40.286 }, 00:17:40.286 "auth": { 00:17:40.286 "state": "completed", 00:17:40.286 "digest": "sha512", 00:17:40.286 "dhgroup": "null" 00:17:40.286 } 00:17:40.286 } 00:17:40.286 ]' 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.547 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.807 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:40.807 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.378 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.639 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.900 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.900 { 00:17:41.900 "cntlid": 99, 00:17:41.900 "qid": 0, 00:17:41.900 "state": "enabled", 00:17:41.900 "thread": "nvmf_tgt_poll_group_000", 00:17:41.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:41.900 "listen_address": { 00:17:41.900 "trtype": "TCP", 00:17:41.900 "adrfam": "IPv4", 00:17:41.900 "traddr": "10.0.0.2", 00:17:41.900 "trsvcid": "4420" 00:17:41.900 }, 00:17:41.900 "peer_address": { 00:17:41.900 "trtype": "TCP", 00:17:41.900 "adrfam": "IPv4", 00:17:41.900 "traddr": "10.0.0.1", 00:17:41.900 "trsvcid": "48280" 00:17:41.900 }, 00:17:41.900 "auth": { 00:17:41.900 "state": "completed", 00:17:41.900 "digest": "sha512", 00:17:41.900 "dhgroup": "null" 00:17:41.900 } 00:17:41.900 } 00:17:41.900 ]' 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.900 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.161 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.161 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:42.161 13:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.102 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:43.103 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.363 00:17:43.363 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.363 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.363 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.625 { 00:17:43.625 "cntlid": 101, 00:17:43.625 "qid": 0, 00:17:43.625 "state": "enabled", 00:17:43.625 "thread": "nvmf_tgt_poll_group_000", 00:17:43.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:43.625 "listen_address": { 00:17:43.625 "trtype": "TCP", 00:17:43.625 "adrfam": "IPv4", 00:17:43.625 "traddr": "10.0.0.2", 00:17:43.625 "trsvcid": "4420" 00:17:43.625 }, 00:17:43.625 "peer_address": { 00:17:43.625 "trtype": "TCP", 00:17:43.625 "adrfam": "IPv4", 00:17:43.625 "traddr": "10.0.0.1", 00:17:43.625 "trsvcid": "48304" 00:17:43.625 }, 00:17:43.625 "auth": { 00:17:43.625 "state": "completed", 00:17:43.625 "digest": "sha512", 00:17:43.625 "dhgroup": "null" 00:17:43.625 } 00:17:43.625 } 00:17:43.625 ]' 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.625 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.626 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.626 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.887 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:43.887 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.460 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.721 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.981 00:17:44.981 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.981 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.981 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.243 { 00:17:45.243 "cntlid": 103, 00:17:45.243 "qid": 0, 00:17:45.243 "state": "enabled", 00:17:45.243 "thread": "nvmf_tgt_poll_group_000", 00:17:45.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:45.243 "listen_address": { 00:17:45.243 "trtype": "TCP", 00:17:45.243 "adrfam": "IPv4", 00:17:45.243 "traddr": "10.0.0.2", 00:17:45.243 "trsvcid": "4420" 00:17:45.243 }, 00:17:45.243 "peer_address": { 00:17:45.243 "trtype": "TCP", 00:17:45.243 "adrfam": "IPv4", 00:17:45.243 "traddr": "10.0.0.1", 00:17:45.243 "trsvcid": "48328" 00:17:45.243 }, 00:17:45.243 "auth": { 00:17:45.243 "state": "completed", 00:17:45.243 "digest": "sha512", 00:17:45.243 "dhgroup": "null" 00:17:45.243 } 00:17:45.243 } 00:17:45.243 ]' 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.243 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.243 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:45.243 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.243 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.243 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.243 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.504 13:14:27 
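At this point the dhgroup loop advances from null to ffdhe2048. The overall structure driving all of these iterations is visible in the auth.sh@118-121 and @123 markers: three nested loops over digests, dhgroups, and key indices, roughly as sketched below (the exact contents of the digests and dhgroups arrays are an assumption; this portion of the log exercises sha384 and sha512 against ffdhe8192, null, and ffdhe2048):

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Reconfigure the host for exactly one digest/dhgroup combination,
            # then run the attach/verify/detach cycle for this key index.
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done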
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:45.504 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.076 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.337 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.598 00:17:46.598 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.598 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.598 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.860 { 00:17:46.860 "cntlid": 105, 00:17:46.860 "qid": 0, 00:17:46.860 "state": "enabled", 00:17:46.860 "thread": "nvmf_tgt_poll_group_000", 00:17:46.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:46.860 "listen_address": { 00:17:46.860 "trtype": "TCP", 00:17:46.860 "adrfam": "IPv4", 00:17:46.860 "traddr": "10.0.0.2", 00:17:46.860 "trsvcid": "4420" 00:17:46.860 }, 00:17:46.860 "peer_address": { 00:17:46.860 "trtype": "TCP", 00:17:46.860 "adrfam": "IPv4", 00:17:46.860 "traddr": "10.0.0.1", 00:17:46.860 "trsvcid": "48354" 00:17:46.860 }, 00:17:46.860 "auth": { 00:17:46.860 "state": "completed", 00:17:46.860 "digest": "sha512", 00:17:46.860 "dhgroup": "ffdhe2048" 00:17:46.860 } 00:17:46.860 } 00:17:46.860 ]' 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.860 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.860 13:14:28 
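One reading aid for the xtrace output above: lines such as [[ nvme0 == \n\v\m\e\0 ]] appear to be bash's set -x rendering of a quoted comparison string, where escaping every character marks the right-hand side as a literal match rather than a glob pattern. The underlying check is simply (rpc.py abbreviated as before):

# Verify that exactly the expected controller name came back from the host app.
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]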
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.121 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:47.121 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:47.692 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.692 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:47.692 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.693 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.693 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.693 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.693 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.693 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.954 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.214 00:17:48.214 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.214 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.215 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.476 { 00:17:48.476 "cntlid": 107, 00:17:48.476 "qid": 0, 00:17:48.476 "state": "enabled", 00:17:48.476 "thread": "nvmf_tgt_poll_group_000", 00:17:48.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:48.476 "listen_address": { 00:17:48.476 "trtype": "TCP", 00:17:48.476 "adrfam": "IPv4", 00:17:48.476 "traddr": "10.0.0.2", 00:17:48.476 "trsvcid": "4420" 00:17:48.476 }, 00:17:48.476 "peer_address": { 00:17:48.476 "trtype": "TCP", 00:17:48.476 "adrfam": "IPv4", 00:17:48.476 "traddr": "10.0.0.1", 00:17:48.476 "trsvcid": "48392" 00:17:48.476 }, 00:17:48.476 "auth": { 00:17:48.476 "state": "completed", 00:17:48.476 "digest": "sha512", 00:17:48.476 "dhgroup": "ffdhe2048" 00:17:48.476 } 00:17:48.476 } 00:17:48.476 ]' 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.476 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.738 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:48.738 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.311 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
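The add_host call above sits at the midpoint of one connect_authenticate pass, and every pass in this trace has the same shape: reconfigure the host-side bdev_nvme layer to negotiate exactly one digest and one DH group, register the host NQN on cnode0 with the matching DH-HMAC-CHAP key pair, then attach a controller so the resulting qpair can be inspected. A minimal standalone sketch of that sequence, assuming rpc.py is on PATH, the target listens on 10.0.0.2:4420, and the keyring entries key2/ckey2 were registered earlier in the script (outside this excerpt); HOSTNQN stands in for the uuid-based host NQN used throughout the log:

    # One DH-HMAC-CHAP pass: host options, subsystem host entry, then attach.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2    # target-side RPC socket
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2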
00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.572 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.833 00:17:49.833 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.833 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.833 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.094 { 00:17:50.094 "cntlid": 109, 00:17:50.094 "qid": 0, 00:17:50.094 "state": "enabled", 00:17:50.094 "thread": "nvmf_tgt_poll_group_000", 00:17:50.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:50.094 "listen_address": { 00:17:50.094 "trtype": "TCP", 00:17:50.094 "adrfam": "IPv4", 00:17:50.094 "traddr": "10.0.0.2", 00:17:50.094 "trsvcid": "4420" 00:17:50.094 }, 00:17:50.094 "peer_address": { 00:17:50.094 "trtype": "TCP", 00:17:50.094 "adrfam": "IPv4", 00:17:50.094 "traddr": "10.0.0.1", 00:17:50.094 "trsvcid": "48418" 00:17:50.094 }, 00:17:50.094 "auth": { 00:17:50.094 "state": "completed", 00:17:50.094 "digest": "sha512", 00:17:50.094 "dhgroup": "ffdhe2048" 00:17:50.094 } 00:17:50.094 } 00:17:50.094 ]' 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.094 13:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.094 13:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.355 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:50.355 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.005 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.292 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.292 13:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.292 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.574 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.574 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.870 { 00:17:51.870 "cntlid": 111, 00:17:51.870 "qid": 0, 00:17:51.870 "state": "enabled", 00:17:51.870 "thread": "nvmf_tgt_poll_group_000", 00:17:51.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:51.870 "listen_address": { 00:17:51.870 "trtype": "TCP", 00:17:51.870 "adrfam": "IPv4", 00:17:51.870 "traddr": "10.0.0.2", 00:17:51.870 "trsvcid": "4420" 00:17:51.870 }, 00:17:51.870 "peer_address": { 00:17:51.870 "trtype": "TCP", 00:17:51.870 "adrfam": "IPv4", 00:17:51.870 "traddr": "10.0.0.1", 00:17:51.870 "trsvcid": "54248" 00:17:51.870 }, 00:17:51.870 "auth": { 00:17:51.870 "state": "completed", 00:17:51.870 "digest": "sha512", 00:17:51.870 "dhgroup": "ffdhe2048" 00:17:51.870 } 00:17:51.870 } 00:17:51.870 ]' 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.870 
13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.870 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.871 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.871 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.131 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:52.131 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.702 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.962 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.963 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.963 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.963 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.963 00:17:53.223 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.223 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.223 13:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.223 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.223 { 00:17:53.223 "cntlid": 113, 00:17:53.223 "qid": 0, 00:17:53.223 "state": "enabled", 00:17:53.223 "thread": "nvmf_tgt_poll_group_000", 00:17:53.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:53.223 "listen_address": { 00:17:53.223 "trtype": "TCP", 00:17:53.223 "adrfam": "IPv4", 00:17:53.223 "traddr": "10.0.0.2", 00:17:53.223 "trsvcid": "4420" 00:17:53.223 }, 00:17:53.223 "peer_address": { 00:17:53.223 "trtype": "TCP", 00:17:53.223 "adrfam": "IPv4", 00:17:53.223 "traddr": "10.0.0.1", 00:17:53.223 "trsvcid": "54266" 00:17:53.223 }, 00:17:53.223 "auth": { 00:17:53.223 "state": "completed", 00:17:53.223 "digest": "sha512", 00:17:53.223 "dhgroup": "ffdhe3072" 00:17:53.223 } 00:17:53.223 } 00:17:53.223 ]' 00:17:53.223 13:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.483 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.744 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:53.744 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.314 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.574 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.575 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.835 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.835 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.095 { 00:17:55.095 "cntlid": 115, 00:17:55.095 "qid": 0, 00:17:55.095 "state": "enabled", 00:17:55.095 "thread": "nvmf_tgt_poll_group_000", 00:17:55.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:55.095 "listen_address": { 00:17:55.095 "trtype": "TCP", 00:17:55.095 "adrfam": "IPv4", 00:17:55.095 "traddr": "10.0.0.2", 00:17:55.095 "trsvcid": "4420" 00:17:55.095 }, 00:17:55.095 "peer_address": { 00:17:55.095 "trtype": "TCP", 00:17:55.095 "adrfam": "IPv4", 
00:17:55.095 "traddr": "10.0.0.1", 00:17:55.095 "trsvcid": "54304" 00:17:55.095 }, 00:17:55.095 "auth": { 00:17:55.095 "state": "completed", 00:17:55.095 "digest": "sha512", 00:17:55.095 "dhgroup": "ffdhe3072" 00:17:55.095 } 00:17:55.095 } 00:17:55.095 ]' 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.095 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.355 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:55.355 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.927 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
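Before the ffdhe3072/keyid-2 pass proceeds, note what the jq checks traced just above actually assert: nvmf_subsystem_get_qpairs reports the authentication outcome per qpair, and the test requires the negotiated digest, DH group, and final auth state to match what was just configured. A standalone equivalent, assuming the target answers on its default RPC socket:

    # Verify the negotiated auth parameters on the established qpair.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]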
00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.187 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.448 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.448 { 00:17:56.448 "cntlid": 117, 00:17:56.448 "qid": 0, 00:17:56.448 "state": "enabled", 00:17:56.448 "thread": "nvmf_tgt_poll_group_000", 00:17:56.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:56.448 "listen_address": { 00:17:56.448 "trtype": "TCP", 
00:17:56.448 "adrfam": "IPv4", 00:17:56.448 "traddr": "10.0.0.2", 00:17:56.448 "trsvcid": "4420" 00:17:56.448 }, 00:17:56.448 "peer_address": { 00:17:56.448 "trtype": "TCP", 00:17:56.448 "adrfam": "IPv4", 00:17:56.448 "traddr": "10.0.0.1", 00:17:56.448 "trsvcid": "54334" 00:17:56.448 }, 00:17:56.448 "auth": { 00:17:56.448 "state": "completed", 00:17:56.448 "digest": "sha512", 00:17:56.448 "dhgroup": "ffdhe3072" 00:17:56.448 } 00:17:56.448 } 00:17:56.448 ]' 00:17:56.448 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.708 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.709 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.969 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:56.970 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.543 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.804 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.065 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.065 { 00:17:58.065 "cntlid": 119, 00:17:58.065 "qid": 0, 00:17:58.065 "state": "enabled", 00:17:58.065 "thread": "nvmf_tgt_poll_group_000", 00:17:58.065 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:58.065 "listen_address": { 00:17:58.065 "trtype": "TCP", 00:17:58.065 "adrfam": "IPv4", 00:17:58.065 "traddr": "10.0.0.2", 00:17:58.065 "trsvcid": "4420" 00:17:58.065 }, 00:17:58.065 "peer_address": { 00:17:58.065 "trtype": "TCP", 00:17:58.065 "adrfam": "IPv4", 00:17:58.065 "traddr": "10.0.0.1", 00:17:58.065 "trsvcid": "54368" 00:17:58.065 }, 00:17:58.065 "auth": { 00:17:58.065 "state": "completed", 00:17:58.065 "digest": "sha512", 00:17:58.065 "dhgroup": "ffdhe3072" 00:17:58.065 } 00:17:58.065 } 00:17:58.065 ]' 00:17:58.065 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.326 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.326 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.326 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.326 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.326 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.326 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.326 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.586 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:58.586 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.159 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.159 13:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.420 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.682 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.682 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.943 13:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.943 { 00:17:59.943 "cntlid": 121, 00:17:59.943 "qid": 0, 00:17:59.943 "state": "enabled", 00:17:59.943 "thread": "nvmf_tgt_poll_group_000", 00:17:59.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:59.943 "listen_address": { 00:17:59.943 "trtype": "TCP", 00:17:59.943 "adrfam": "IPv4", 00:17:59.943 "traddr": "10.0.0.2", 00:17:59.943 "trsvcid": "4420" 00:17:59.943 }, 00:17:59.943 "peer_address": { 00:17:59.943 "trtype": "TCP", 00:17:59.943 "adrfam": "IPv4", 00:17:59.943 "traddr": "10.0.0.1", 00:17:59.943 "trsvcid": "54396" 00:17:59.943 }, 00:17:59.943 "auth": { 00:17:59.943 "state": "completed", 00:17:59.943 "digest": "sha512", 00:17:59.943 "dhgroup": "ffdhe4096" 00:17:59.943 } 00:17:59.943 } 00:17:59.943 ]' 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.943 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.203 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:00.203 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
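Each pass also exercises the kernel initiator: after the bdev_nvme attach/detach, nvme-cli connects with the same key material in DHHC-1 wire format, disconnects, and the host entry is removed before the next keyid. A sketch of that leg, where HOSTNQN/HOSTID and the DHHC-1:<nn>:<base64-secret>: placeholders stand in for the uuid NQN and the secret blobs visible verbatim in the trace (the <nn> field differs per key in this run):

    # Kernel-initiator leg plus teardown for one keyid.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:<nn>:<base64-secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:<nn>:<base64-secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"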
00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.775 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.037 13:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.298 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.298 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.558 { 00:18:01.558 "cntlid": 123, 00:18:01.558 "qid": 0, 00:18:01.558 "state": "enabled", 00:18:01.558 "thread": "nvmf_tgt_poll_group_000", 00:18:01.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:01.558 "listen_address": { 00:18:01.558 "trtype": "TCP", 00:18:01.558 "adrfam": "IPv4", 00:18:01.558 "traddr": "10.0.0.2", 00:18:01.558 "trsvcid": "4420" 00:18:01.558 }, 00:18:01.558 "peer_address": { 00:18:01.558 "trtype": "TCP", 00:18:01.558 "adrfam": "IPv4", 00:18:01.558 "traddr": "10.0.0.1", 00:18:01.558 "trsvcid": "46916" 00:18:01.558 }, 00:18:01.558 "auth": { 00:18:01.558 "state": "completed", 00:18:01.558 "digest": "sha512", 00:18:01.558 "dhgroup": "ffdhe4096" 00:18:01.558 } 00:18:01.558 } 00:18:01.558 ]' 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.558 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.559 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.559 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.559 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.559 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.819 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:01.819 13:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.390 13:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.390 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.651 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.912 00:18:02.912 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.912 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.912 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.172 13:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.172 { 00:18:03.172 "cntlid": 125, 00:18:03.172 "qid": 0, 00:18:03.172 "state": "enabled", 00:18:03.172 "thread": "nvmf_tgt_poll_group_000", 00:18:03.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:03.172 "listen_address": { 00:18:03.172 "trtype": "TCP", 00:18:03.172 "adrfam": "IPv4", 00:18:03.172 "traddr": "10.0.0.2", 00:18:03.172 "trsvcid": "4420" 00:18:03.172 }, 00:18:03.172 "peer_address": { 00:18:03.172 "trtype": "TCP", 00:18:03.172 "adrfam": "IPv4", 00:18:03.172 "traddr": "10.0.0.1", 00:18:03.172 "trsvcid": "46938" 00:18:03.172 }, 00:18:03.172 "auth": { 00:18:03.172 "state": "completed", 00:18:03.172 "digest": "sha512", 00:18:03.172 "dhgroup": "ffdhe4096" 00:18:03.172 } 00:18:03.172 } 00:18:03.172 ]' 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.172 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.172 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.172 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.172 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.432 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:03.432 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.003 13:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.263 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.524 00:18:04.524 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.524 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.524 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.784 13:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.784 { 00:18:04.784 "cntlid": 127, 00:18:04.784 "qid": 0, 00:18:04.784 "state": "enabled", 00:18:04.784 "thread": "nvmf_tgt_poll_group_000", 00:18:04.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:04.784 "listen_address": { 00:18:04.784 "trtype": "TCP", 00:18:04.784 "adrfam": "IPv4", 00:18:04.784 "traddr": "10.0.0.2", 00:18:04.784 "trsvcid": "4420" 00:18:04.784 }, 00:18:04.784 "peer_address": { 00:18:04.784 "trtype": "TCP", 00:18:04.784 "adrfam": "IPv4", 00:18:04.784 "traddr": "10.0.0.1", 00:18:04.784 "trsvcid": "46964" 00:18:04.784 }, 00:18:04.784 "auth": { 00:18:04.784 "state": "completed", 00:18:04.784 "digest": "sha512", 00:18:04.784 "dhgroup": "ffdhe4096" 00:18:04.784 } 00:18:04.784 } 00:18:04.784 ]' 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.784 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.044 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:05.045 13:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:05.615 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.615 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.616 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.875 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.135 00:18:06.135 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.135 13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.135 
13:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.395 { 00:18:06.395 "cntlid": 129, 00:18:06.395 "qid": 0, 00:18:06.395 "state": "enabled", 00:18:06.395 "thread": "nvmf_tgt_poll_group_000", 00:18:06.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:06.395 "listen_address": { 00:18:06.395 "trtype": "TCP", 00:18:06.395 "adrfam": "IPv4", 00:18:06.395 "traddr": "10.0.0.2", 00:18:06.395 "trsvcid": "4420" 00:18:06.395 }, 00:18:06.395 "peer_address": { 00:18:06.395 "trtype": "TCP", 00:18:06.395 "adrfam": "IPv4", 00:18:06.395 "traddr": "10.0.0.1", 00:18:06.395 "trsvcid": "46984" 00:18:06.395 }, 00:18:06.395 "auth": { 00:18:06.395 "state": "completed", 00:18:06.395 "digest": "sha512", 00:18:06.395 "dhgroup": "ffdhe6144" 00:18:06.395 } 00:18:06.395 } 00:18:06.395 ]' 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.395 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.656 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.656 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.656 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.656 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:06.656 13:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.596 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.857 00:18:07.857 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.857 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.857 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.117 { 00:18:08.117 "cntlid": 131, 00:18:08.117 "qid": 0, 00:18:08.117 "state": "enabled", 00:18:08.117 "thread": "nvmf_tgt_poll_group_000", 00:18:08.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:08.117 "listen_address": { 00:18:08.117 "trtype": "TCP", 00:18:08.117 "adrfam": "IPv4", 00:18:08.117 "traddr": "10.0.0.2", 00:18:08.117 "trsvcid": "4420" 00:18:08.117 }, 00:18:08.117 "peer_address": { 00:18:08.117 "trtype": "TCP", 00:18:08.117 "adrfam": "IPv4", 00:18:08.117 "traddr": "10.0.0.1", 00:18:08.117 "trsvcid": "47012" 00:18:08.117 }, 00:18:08.117 "auth": { 00:18:08.117 "state": "completed", 00:18:08.117 "digest": "sha512", 00:18:08.117 "dhgroup": "ffdhe6144" 00:18:08.117 } 00:18:08.117 } 00:18:08.117 ]' 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.117 13:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.377 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.377 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.377 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.377 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:08.377 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.319 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.319 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.580 00:18:09.580 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.580 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.580 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.840 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.840 { 00:18:09.840 "cntlid": 133, 00:18:09.840 "qid": 0, 00:18:09.840 "state": "enabled", 00:18:09.840 "thread": "nvmf_tgt_poll_group_000", 00:18:09.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:09.840 "listen_address": { 00:18:09.840 "trtype": "TCP", 00:18:09.840 "adrfam": "IPv4", 00:18:09.840 "traddr": "10.0.0.2", 00:18:09.840 "trsvcid": "4420" 00:18:09.840 }, 00:18:09.840 "peer_address": { 00:18:09.840 "trtype": "TCP", 00:18:09.840 "adrfam": "IPv4", 00:18:09.840 "traddr": "10.0.0.1", 00:18:09.840 "trsvcid": "47032" 00:18:09.840 }, 00:18:09.840 "auth": { 00:18:09.840 "state": "completed", 00:18:09.840 "digest": "sha512", 00:18:09.840 "dhgroup": "ffdhe6144" 00:18:09.840 } 00:18:09.840 } 00:18:09.840 ]' 00:18:09.841 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.841 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.841 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.841 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.841 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.101 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.101 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.101 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.101 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret 
DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:10.101 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:11.044 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.304 00:18:11.304 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.304 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.304 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.565 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.565 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.565 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.566 { 00:18:11.566 "cntlid": 135, 00:18:11.566 "qid": 0, 00:18:11.566 "state": "enabled", 00:18:11.566 "thread": "nvmf_tgt_poll_group_000", 00:18:11.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:11.566 "listen_address": { 00:18:11.566 "trtype": "TCP", 00:18:11.566 "adrfam": "IPv4", 00:18:11.566 "traddr": "10.0.0.2", 00:18:11.566 "trsvcid": "4420" 00:18:11.566 }, 00:18:11.566 "peer_address": { 00:18:11.566 "trtype": "TCP", 00:18:11.566 "adrfam": "IPv4", 00:18:11.566 "traddr": "10.0.0.1", 00:18:11.566 "trsvcid": "42856" 00:18:11.566 }, 00:18:11.566 "auth": { 00:18:11.566 "state": "completed", 00:18:11.566 "digest": "sha512", 00:18:11.566 "dhgroup": "ffdhe6144" 00:18:11.566 } 00:18:11.566 } 00:18:11.566 ]' 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.566 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.827 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.827 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.827 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.827 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:11.827 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.769 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.342 00:18:13.342 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.342 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.342 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.342 { 00:18:13.342 "cntlid": 137, 00:18:13.342 "qid": 0, 00:18:13.342 "state": "enabled", 00:18:13.342 "thread": "nvmf_tgt_poll_group_000", 00:18:13.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:13.342 "listen_address": { 00:18:13.342 "trtype": "TCP", 00:18:13.342 "adrfam": "IPv4", 00:18:13.342 "traddr": "10.0.0.2", 00:18:13.342 "trsvcid": "4420" 00:18:13.342 }, 00:18:13.342 "peer_address": { 00:18:13.342 "trtype": "TCP", 00:18:13.342 "adrfam": "IPv4", 00:18:13.342 "traddr": "10.0.0.1", 00:18:13.342 "trsvcid": "42890" 00:18:13.342 }, 00:18:13.342 "auth": { 00:18:13.342 "state": "completed", 00:18:13.342 "digest": "sha512", 00:18:13.342 "dhgroup": "ffdhe8192" 00:18:13.342 } 00:18:13.342 } 00:18:13.342 ]' 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.342 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:13.603 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.546 13:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.546 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.119 00:18:15.119 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.119 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.119 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.119 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.119 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.119 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.119 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.380 { 00:18:15.380 "cntlid": 139, 00:18:15.380 "qid": 0, 00:18:15.380 "state": "enabled", 00:18:15.380 "thread": "nvmf_tgt_poll_group_000", 00:18:15.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:15.380 "listen_address": { 00:18:15.380 "trtype": "TCP", 00:18:15.380 "adrfam": "IPv4", 00:18:15.380 "traddr": "10.0.0.2", 00:18:15.380 "trsvcid": "4420" 00:18:15.380 }, 00:18:15.380 "peer_address": { 00:18:15.380 "trtype": "TCP", 00:18:15.380 "adrfam": "IPv4", 00:18:15.380 "traddr": "10.0.0.1", 00:18:15.380 "trsvcid": "42924" 00:18:15.380 }, 00:18:15.380 "auth": { 00:18:15.380 "state": "completed", 00:18:15.380 "digest": "sha512", 00:18:15.380 "dhgroup": "ffdhe8192" 00:18:15.380 } 00:18:15.380 } 00:18:15.380 ]' 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.380 13:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.380 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.642 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:15.642 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: --dhchap-ctrl-secret DHHC-1:02:MTI0ZTQ1YThlZDRhMjg4Zjg2MjdkMjM4MThjZjBjMGJiOWI4YjkzMmFkOGJiMWJjElasPw==: 00:18:16.214 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.214 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.475 13:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.475 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.048 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.048 { 00:18:17.048 "cntlid": 141, 00:18:17.048 "qid": 0, 00:18:17.048 "state": "enabled", 00:18:17.048 "thread": "nvmf_tgt_poll_group_000", 00:18:17.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:17.048 "listen_address": { 00:18:17.048 "trtype": "TCP", 00:18:17.048 "adrfam": "IPv4", 00:18:17.048 "traddr": "10.0.0.2", 00:18:17.048 "trsvcid": "4420" 00:18:17.048 }, 00:18:17.048 "peer_address": { 00:18:17.048 "trtype": "TCP", 00:18:17.048 "adrfam": "IPv4", 00:18:17.048 "traddr": "10.0.0.1", 00:18:17.048 "trsvcid": "42954" 00:18:17.048 }, 00:18:17.048 "auth": { 00:18:17.048 "state": "completed", 00:18:17.048 "digest": "sha512", 00:18:17.048 "dhgroup": "ffdhe8192" 00:18:17.048 } 00:18:17.048 } 00:18:17.048 ]' 00:18:17.048 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.310 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.310 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.310 13:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.310 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.310 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.310 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.310 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.571 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:17.571 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:01:ZDdlMjFjMDI1OTJiYjY4YWZlOWIyNzEwMWMzNjY4NjHHh+Rk: 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.144 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.411 13:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.411 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.719 00:18:18.719 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.719 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.719 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.034 { 00:18:19.034 "cntlid": 143, 00:18:19.034 "qid": 0, 00:18:19.034 "state": "enabled", 00:18:19.034 "thread": "nvmf_tgt_poll_group_000", 00:18:19.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:19.034 "listen_address": { 00:18:19.034 "trtype": "TCP", 00:18:19.034 "adrfam": "IPv4", 00:18:19.034 "traddr": "10.0.0.2", 00:18:19.034 "trsvcid": "4420" 00:18:19.034 }, 00:18:19.034 "peer_address": { 00:18:19.034 "trtype": "TCP", 00:18:19.034 "adrfam": "IPv4", 00:18:19.034 "traddr": "10.0.0.1", 00:18:19.034 "trsvcid": "42980" 00:18:19.034 }, 00:18:19.034 "auth": { 00:18:19.034 "state": "completed", 00:18:19.034 "digest": "sha512", 00:18:19.034 "dhgroup": "ffdhe8192" 00:18:19.034 } 00:18:19.034 } 00:18:19.034 ]' 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.034 
13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.034 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.296 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.296 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.296 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.296 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:19.296 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:19.869 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.131 13:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.131 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.704 00:18:20.704 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.704 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.704 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.966 { 00:18:20.966 "cntlid": 145, 00:18:20.966 "qid": 0, 00:18:20.966 "state": "enabled", 00:18:20.966 "thread": "nvmf_tgt_poll_group_000", 00:18:20.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:20.966 "listen_address": { 00:18:20.966 "trtype": "TCP", 00:18:20.966 "adrfam": "IPv4", 00:18:20.966 "traddr": "10.0.0.2", 00:18:20.966 "trsvcid": "4420" 00:18:20.966 }, 00:18:20.966 "peer_address": { 00:18:20.966 
"trtype": "TCP", 00:18:20.966 "adrfam": "IPv4", 00:18:20.966 "traddr": "10.0.0.1", 00:18:20.966 "trsvcid": "43000" 00:18:20.966 }, 00:18:20.966 "auth": { 00:18:20.966 "state": "completed", 00:18:20.966 "digest": "sha512", 00:18:20.966 "dhgroup": "ffdhe8192" 00:18:20.966 } 00:18:20.966 } 00:18:20.966 ]' 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.966 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.227 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:21.227 13:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:ZTI4YjhhODNiMjFiMjViYTYxMzRmZjZhNGFhZTU2MDE2ODdmMTY0M2ZmYzQ2N2Y5gKwEXQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M4MzFlOWE4ODc5Y2MxMzljOWNjNDFhNmE5OWI4MzE2OTAzYWYzZjhhMmRhOGNmNGVkODQ1YWY2NDBhZmIzZqtat08=: 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.799 13:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:22.371 request: 00:18:22.371 { 00:18:22.371 "name": "nvme0", 00:18:22.371 "trtype": "tcp", 00:18:22.371 "traddr": "10.0.0.2", 00:18:22.371 "adrfam": "ipv4", 00:18:22.371 "trsvcid": "4420", 00:18:22.371 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:22.371 "prchk_reftag": false, 00:18:22.371 "prchk_guard": false, 00:18:22.371 "hdgst": false, 00:18:22.371 "ddgst": false, 00:18:22.371 "dhchap_key": "key2", 00:18:22.371 "allow_unrecognized_csi": false, 00:18:22.371 "method": "bdev_nvme_attach_controller", 00:18:22.371 "req_id": 1 00:18:22.372 } 00:18:22.372 Got JSON-RPC error response 00:18:22.372 response: 00:18:22.372 { 00:18:22.372 "code": -5, 00:18:22.372 "message": "Input/output error" 00:18:22.372 } 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.372 13:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.372 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.943 request: 00:18:22.943 { 00:18:22.943 "name": "nvme0", 00:18:22.943 "trtype": "tcp", 00:18:22.943 "traddr": "10.0.0.2", 00:18:22.943 "adrfam": "ipv4", 00:18:22.943 "trsvcid": "4420", 00:18:22.943 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:22.943 "prchk_reftag": false, 00:18:22.943 "prchk_guard": false, 00:18:22.943 "hdgst": false, 00:18:22.943 "ddgst": false, 00:18:22.943 "dhchap_key": "key1", 00:18:22.943 "dhchap_ctrlr_key": "ckey2", 00:18:22.943 "allow_unrecognized_csi": false, 00:18:22.943 "method": "bdev_nvme_attach_controller", 00:18:22.943 "req_id": 1 00:18:22.943 } 00:18:22.943 Got JSON-RPC error response 00:18:22.943 response: 00:18:22.943 { 00:18:22.943 "code": -5, 00:18:22.943 "message": "Input/output error" 00:18:22.943 } 00:18:22.943 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:22.944 13:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.944 13:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.205 request: 00:18:23.205 { 00:18:23.205 "name": "nvme0", 00:18:23.205 "trtype": "tcp", 00:18:23.205 "traddr": "10.0.0.2", 00:18:23.205 "adrfam": "ipv4", 00:18:23.205 "trsvcid": "4420", 00:18:23.205 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:23.206 "prchk_reftag": false, 00:18:23.206 "prchk_guard": false, 00:18:23.206 "hdgst": false, 00:18:23.206 "ddgst": false, 00:18:23.206 "dhchap_key": "key1", 00:18:23.206 "dhchap_ctrlr_key": "ckey1", 00:18:23.206 "allow_unrecognized_csi": false, 00:18:23.206 "method": "bdev_nvme_attach_controller", 00:18:23.206 "req_id": 1 00:18:23.206 } 00:18:23.206 Got JSON-RPC error response 00:18:23.206 response: 00:18:23.206 { 00:18:23.206 "code": -5, 00:18:23.206 "message": "Input/output error" 00:18:23.206 } 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1697147 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1697147 ']' 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1697147 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.206 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1697147 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1697147' 00:18:23.466 killing process with pid 1697147 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1697147 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1697147 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.466 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1722990 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1722990 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1722990 ']' 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.467 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1722990 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1722990 ']' 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
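At this point auth.sh has restarted nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and the nvmf_auth trace flag, so every DH-HMAC-CHAP transaction from here on is logged. The trace below then re-registers the generated secrets through the keyring. A minimal sketch of that RPC sequence, assuming the restarted target's default /var/tmp/spdk.sock socket (key names and file paths are copied from the trace; the files hold the generated DHHC-1 secrets):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register each host key (keyN) and, where one was generated, its matching
# controller key (ckeyN); key3 has no controller key in this run.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.Jjo
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFl
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.HwX
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N3W
$RPC keyring_file_add_key key2  /tmp/spdk.key-sha384.lvX
$RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w7M
$RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.hNA

Each key file contains a single DHHC-1 secret of the same form as the --dhchap-secret values in the nvme connect lines earlier in this log.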
00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.728 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 null0 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Jjo 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.iFl ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.iFl 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HwX 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.N3W ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.N3W 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.990 13:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lvX 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.w7M ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w7M 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hNA 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.990 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.252 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.252 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.252 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
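The bdev_connect step at target/auth.sh@60 just above expands to the rpc.py invocation that follows, issued against the second SPDK application on /var/tmp/host.sock, which plays the NVMe-oF host role. Only --dhchap-key key3 is supplied (no ckey3 was generated), so this session authenticates unidirectionally: the target verifies the host, but the host does not verify the target. The same call, rendered with line breaks for readability (all values copied from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Host side: attach a controller to the authenticated subsystem; -q sets the
# host NQN and --dhchap-key names the keyring entry registered above.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key3

The nvme0n1 printed after the attach completes is the namespace exposed once the DH-HMAC-CHAP exchange succeeds; handshakes that are expected to fail surface instead as the JSON-RPC "Input/output error" (code -5) responses seen in the NOT-wrapped negative tests earlier in this log.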
00:18:24.252 13:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.826 nvme0n1 00:18:24.826 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.826 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.826 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.086 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.087 { 00:18:25.087 "cntlid": 1, 00:18:25.087 "qid": 0, 00:18:25.087 "state": "enabled", 00:18:25.087 "thread": "nvmf_tgt_poll_group_000", 00:18:25.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:25.087 "listen_address": { 00:18:25.087 "trtype": "TCP", 00:18:25.087 "adrfam": "IPv4", 00:18:25.087 "traddr": "10.0.0.2", 00:18:25.087 "trsvcid": "4420" 00:18:25.087 }, 00:18:25.087 "peer_address": { 00:18:25.087 "trtype": "TCP", 00:18:25.087 "adrfam": "IPv4", 00:18:25.087 "traddr": "10.0.0.1", 00:18:25.087 "trsvcid": "53718" 00:18:25.087 }, 00:18:25.087 "auth": { 00:18:25.087 "state": "completed", 00:18:25.087 "digest": "sha512", 00:18:25.087 "dhgroup": "ffdhe8192" 00:18:25.087 } 00:18:25.087 } 00:18:25.087 ]' 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.087 13:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.348 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:25.348 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:25.920 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:26.182 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.182 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.443 request: 00:18:26.443 { 00:18:26.443 "name": "nvme0", 00:18:26.443 "trtype": "tcp", 00:18:26.443 "traddr": "10.0.0.2", 00:18:26.443 "adrfam": "ipv4", 00:18:26.443 "trsvcid": "4420", 00:18:26.443 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:26.443 "prchk_reftag": false, 00:18:26.443 "prchk_guard": false, 00:18:26.443 "hdgst": false, 00:18:26.443 "ddgst": false, 00:18:26.443 "dhchap_key": "key3", 00:18:26.443 "allow_unrecognized_csi": false, 00:18:26.443 "method": "bdev_nvme_attach_controller", 00:18:26.443 "req_id": 1 00:18:26.443 } 00:18:26.443 Got JSON-RPC error response 00:18:26.443 response: 00:18:26.443 { 00:18:26.443 "code": -5, 00:18:26.443 "message": "Input/output error" 00:18:26.443 } 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.443 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.703 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.704 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.965 request: 00:18:26.965 { 00:18:26.965 "name": "nvme0", 00:18:26.965 "trtype": "tcp", 00:18:26.965 "traddr": "10.0.0.2", 00:18:26.965 "adrfam": "ipv4", 00:18:26.966 "trsvcid": "4420", 00:18:26.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:26.966 "prchk_reftag": false, 00:18:26.966 "prchk_guard": false, 00:18:26.966 "hdgst": false, 00:18:26.966 "ddgst": false, 00:18:26.966 "dhchap_key": "key3", 00:18:26.966 "allow_unrecognized_csi": false, 00:18:26.966 "method": "bdev_nvme_attach_controller", 00:18:26.966 "req_id": 1 00:18:26.966 } 00:18:26.966 Got JSON-RPC error response 00:18:26.966 response: 00:18:26.966 { 00:18:26.966 "code": -5, 00:18:26.966 "message": "Input/output error" 00:18:26.966 } 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.966 13:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.539 request: 00:18:27.539 { 00:18:27.539 "name": "nvme0", 00:18:27.539 "trtype": "tcp", 00:18:27.539 "traddr": "10.0.0.2", 00:18:27.539 "adrfam": "ipv4", 00:18:27.539 "trsvcid": "4420", 00:18:27.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:27.539 "prchk_reftag": false, 00:18:27.539 "prchk_guard": false, 00:18:27.539 "hdgst": false, 00:18:27.539 "ddgst": false, 00:18:27.539 "dhchap_key": "key0", 00:18:27.539 "dhchap_ctrlr_key": "key1", 00:18:27.539 "allow_unrecognized_csi": false, 00:18:27.539 "method": "bdev_nvme_attach_controller", 00:18:27.539 "req_id": 1 00:18:27.539 } 00:18:27.539 Got JSON-RPC error response 00:18:27.539 response: 00:18:27.539 { 00:18:27.539 "code": -5, 00:18:27.539 "message": "Input/output error" 00:18:27.539 } 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:27.539 13:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:27.539 nvme0n1 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:27.539 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.801 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.801 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.801 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.062 13:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.634 nvme0n1 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:28.896 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.158 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.158 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:29.158 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: --dhchap-ctrl-secret DHHC-1:03:YTI4NjMxN2Q5ZWRlNmVlMjc2NGM3ZTU5YmNjYjAyYjExYzgzNWI0NzI5YmEwMmExNWEwNTVmZjU0YWRkYzhhZYYw6as=: 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.101 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.362 request: 00:18:30.362 { 00:18:30.362 "name": "nvme0", 00:18:30.362 "trtype": "tcp", 00:18:30.362 "traddr": "10.0.0.2", 00:18:30.362 "adrfam": "ipv4", 00:18:30.362 "trsvcid": "4420", 00:18:30.362 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:30.362 "prchk_reftag": false, 00:18:30.362 "prchk_guard": false, 00:18:30.362 "hdgst": false, 00:18:30.362 "ddgst": false, 00:18:30.362 "dhchap_key": "key1", 00:18:30.362 "allow_unrecognized_csi": false, 00:18:30.362 "method": "bdev_nvme_attach_controller", 00:18:30.362 "req_id": 1 00:18:30.362 } 00:18:30.362 Got JSON-RPC error response 00:18:30.362 response: 00:18:30.362 { 00:18:30.362 "code": -5, 00:18:30.362 "message": "Input/output error" 00:18:30.362 } 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.623 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.194 nvme0n1 00:18:31.194 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:31.194 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.194 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:31.455 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.455 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.455 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.715 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.976 nvme0n1 00:18:31.976 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:31.976 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:31.977 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.977 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.977 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.977 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: '' 2s 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: ]] 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGMwMTdhMjMzNDE1M2VhOTcyZTQzYjgwYjY4ZmY5MGU37b5G: 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:32.238 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: 2s 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: ]] 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzY4YjZhYzdlOTBjODY3MjFhNGQ0ZjFiMTdiNDliNmQyY2UyYTBiOTUyNGI4MDZjvtEdiA==: 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:34.785 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.698 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.269 nvme0n1 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.269 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.530 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:37.530 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.530 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:37.790 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.050 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.051 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.051 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.051 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:38.051 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.051 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:38.310 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.310 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:38.310 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.310 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.310 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.570 request: 00:18:38.570 { 00:18:38.570 "name": "nvme0", 00:18:38.570 "dhchap_key": "key1", 00:18:38.570 "dhchap_ctrlr_key": "key3", 00:18:38.570 "method": "bdev_nvme_set_keys", 00:18:38.570 "req_id": 1 00:18:38.570 } 00:18:38.570 Got JSON-RPC error response 00:18:38.570 response: 00:18:38.570 { 00:18:38.570 "code": -13, 00:18:38.570 "message": "Permission denied" 00:18:38.570 } 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:38.570 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.831 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:38.831 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:39.771 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:39.771 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:39.771 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.031 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:40.031 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.032 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.602 nvme0n1 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
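
The trace above exercises SPDK's DH-HMAC-CHAP re-key path: nvmf_subsystem_set_keys rotates the key pair the target will accept for a host, bdev_nvme_set_keys pushes replacement keys to the host-side controller, and a pair the subsystem does not allow is rejected with JSON-RPC error -13 (Permission denied). A minimal sketch of that flow, reusing the RPC socket path and NQNs from this run; the polling loop paraphrases the jq length / sleep 1s pattern in the trace and is not verbatim auth.sh source:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

    # Rotate the keys the target accepts for this host first.
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Push matching keys to the host-side controller; a pair the subsystem
    # does not allow (e.g. key1/key3 above) fails with -13 Permission denied.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # A controller left holding stale keys cannot re-authenticate and is
    # dropped once --ctrlr-loss-timeout-sec expires; poll until it is gone.
    while [ "$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
        sleep 1
    done
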
00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.862 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.122 request: 00:18:41.122 { 00:18:41.122 "name": "nvme0", 00:18:41.122 "dhchap_key": "key2", 00:18:41.122 "dhchap_ctrlr_key": "key0", 00:18:41.122 "method": "bdev_nvme_set_keys", 00:18:41.122 "req_id": 1 00:18:41.122 } 00:18:41.122 Got JSON-RPC error response 00:18:41.122 response: 00:18:41.122 { 00:18:41.122 "code": -13, 00:18:41.122 "message": "Permission denied" 00:18:41.122 } 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:41.122 13:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.383 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:41.383 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:42.348 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:42.348 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:42.348 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1697490 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1697490 ']' 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1697490 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:42.608 
13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1697490 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1697490' 00:18:42.608 killing process with pid 1697490 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1697490 00:18:42.608 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1697490 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.868 rmmod nvme_tcp 00:18:42.868 rmmod nvme_fabrics 00:18:42.868 rmmod nvme_keyring 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1722990 ']' 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1722990 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1722990 ']' 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1722990 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1722990 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1722990' 00:18:42.868 killing process with pid 1722990 00:18:42.868 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1722990 00:18:42.868 13:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1722990 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.129 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.043 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.043 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Jjo /tmp/spdk.key-sha256.HwX /tmp/spdk.key-sha384.lvX /tmp/spdk.key-sha512.hNA /tmp/spdk.key-sha512.iFl /tmp/spdk.key-sha384.N3W /tmp/spdk.key-sha256.w7M '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:45.043 00:18:45.043 real 2m36.819s 00:18:45.043 user 5m53.088s 00:18:45.043 sys 0m24.535s 00:18:45.043 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:45.043 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.043 ************************************ 00:18:45.043 END TEST nvmf_auth_target 00:18:45.043 ************************************ 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.305 ************************************ 00:18:45.305 START TEST nvmf_bdevio_no_huge 00:18:45.305 ************************************ 00:18:45.305 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.305 * Looking for test storage... 
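
The teardown traced above follows autotest_common.sh's killprocess helper (verify the pid is set and alive, check the process name, then kill and wait), after which nvmftestfini unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules, restores iptables minus the SPDK_NVMF rules, flushes the test interface address, and removes the generated /tmp/spdk.key-* files. A condensed sketch of the kill step, reconstructed from the xtrace and simplified; the real helper also special-cases a sudo wrapper and FreeBSD, which is omitted here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1       # the '[' -z "$pid" ']' guard in the trace
        kill -0 "$pid" || return 1      # process must still be running
        if [ "$(uname)" = Linux ]; then
            # inspect the command name, as with ps --no-headers -o comm= above;
            # the real helper handles a sudo wrapper differently (simplified here)
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }
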
00:18:45.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:45.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.305 --rc genhtml_branch_coverage=1 00:18:45.305 --rc genhtml_function_coverage=1 00:18:45.305 --rc genhtml_legend=1 00:18:45.305 --rc geninfo_all_blocks=1 00:18:45.305 --rc geninfo_unexecuted_blocks=1 00:18:45.305 00:18:45.305 ' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:45.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.305 --rc genhtml_branch_coverage=1 00:18:45.305 --rc genhtml_function_coverage=1 00:18:45.305 --rc genhtml_legend=1 00:18:45.305 --rc geninfo_all_blocks=1 00:18:45.305 --rc geninfo_unexecuted_blocks=1 00:18:45.305 00:18:45.305 ' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:45.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.305 --rc genhtml_branch_coverage=1 00:18:45.305 --rc genhtml_function_coverage=1 00:18:45.305 --rc genhtml_legend=1 00:18:45.305 --rc geninfo_all_blocks=1 00:18:45.305 --rc geninfo_unexecuted_blocks=1 00:18:45.305 00:18:45.305 ' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:45.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.305 --rc genhtml_branch_coverage=1 00:18:45.305 --rc genhtml_function_coverage=1 00:18:45.305 --rc genhtml_legend=1 00:18:45.305 --rc geninfo_all_blocks=1 00:18:45.305 --rc geninfo_unexecuted_blocks=1 00:18:45.305 00:18:45.305 ' 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.305 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.566 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:45.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.567 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.712 
13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:53.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:53.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:53.712 Found net devices under 0000:31:00.0: cvl_0_0 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.712 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:53.713 Found net devices under 0000:31:00.1: cvl_0_1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:18:53.713 00:18:53.713 --- 10.0.0.2 ping statistics --- 00:18:53.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.713 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:18:53.713 00:18:53.713 --- 10.0.0.1 ping statistics --- 00:18:53.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.713 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1731629 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1731629 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1731629 ']' 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.713 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.713 [2024-11-06 13:15:34.848151] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:18:53.713 [2024-11-06 13:15:34.848221] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:53.713 [2024-11-06 13:15:34.959692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.713 [2024-11-06 13:15:35.042565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.713 [2024-11-06 13:15:35.042623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.713 [2024-11-06 13:15:35.042635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.713 [2024-11-06 13:15:35.042646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.713 [2024-11-06 13:15:35.042660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
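At this point nvmftestinit has finished wiring the two physical E810 ports into a back-to-back NVMe/TCP topology: one port is moved into a private network namespace to act as the target side, while the other stays in the root namespace as the initiator. A condensed sketch of that setup, using the device and namespace names observed in this run (the error handling and xtrace plumbing of nvmf/common.sh is omitted):

    # Target port goes into its own netns; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespaces in place, nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix seen above), which is why the bdevio initiator can later connect to 10.0.0.2:4420 over the loopback cabling between the two ports.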
00:18:53.713 [2024-11-06 13:15:35.044819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:53.713 [2024-11-06 13:15:35.044962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:53.713 [2024-11-06 13:15:35.045123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:53.713 [2024-11-06 13:15:35.045128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.975 [2024-11-06 13:15:35.715339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.975 Malloc0 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.975 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.976 [2024-11-06 13:15:35.770421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:53.976 { 00:18:53.976 "params": { 00:18:53.976 "name": "Nvme$subsystem", 00:18:53.976 "trtype": "$TEST_TRANSPORT", 00:18:53.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.976 "adrfam": "ipv4", 00:18:53.976 "trsvcid": "$NVMF_PORT", 00:18:53.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.976 "hdgst": ${hdgst:-false}, 00:18:53.976 "ddgst": ${ddgst:-false} 00:18:53.976 }, 00:18:53.976 "method": "bdev_nvme_attach_controller" 00:18:53.976 } 00:18:53.976 EOF 00:18:53.976 )") 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:53.976 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:53.976 "params": { 00:18:53.976 "name": "Nvme1", 00:18:53.976 "trtype": "tcp", 00:18:53.976 "traddr": "10.0.0.2", 00:18:53.976 "adrfam": "ipv4", 00:18:53.976 "trsvcid": "4420", 00:18:53.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.976 "hdgst": false, 00:18:53.976 "ddgst": false 00:18:53.976 }, 00:18:53.976 "method": "bdev_nvme_attach_controller" 00:18:53.976 }' 00:18:53.976 [2024-11-06 13:15:35.826761] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:18:53.976 [2024-11-06 13:15:35.826836] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1731832 ] 00:18:54.238 [2024-11-06 13:15:35.926282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.238 [2024-11-06 13:15:35.986877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.238 [2024-11-06 13:15:35.987506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.238 [2024-11-06 13:15:35.987509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.499 I/O targets: 00:18:54.499 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:54.499 00:18:54.499 00:18:54.499 CUnit - A unit testing framework for C - Version 2.1-3 00:18:54.499 http://cunit.sourceforge.net/ 00:18:54.499 00:18:54.499 00:18:54.499 Suite: bdevio tests on: Nvme1n1 00:18:54.499 Test: blockdev write read block ...passed 00:18:54.499 Test: blockdev write zeroes read block ...passed 00:18:54.499 Test: blockdev write zeroes read no split ...passed 00:18:54.499 Test: blockdev write zeroes read split ...passed 00:18:54.499 Test: blockdev write zeroes read split partial ...passed 00:18:54.499 Test: blockdev reset ...[2024-11-06 13:15:36.307409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:54.499 [2024-11-06 13:15:36.307514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46400 (9): Bad file descriptor 00:18:54.499 [2024-11-06 13:15:36.320150] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:54.499 passed 00:18:54.499 Test: blockdev write read 8 blocks ...passed 00:18:54.499 Test: blockdev write read size > 128k ...passed 00:18:54.499 Test: blockdev write read invalid size ...passed 00:18:54.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:54.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:54.760 Test: blockdev write read max offset ...passed 00:18:54.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:54.760 Test: blockdev writev readv 8 blocks ...passed 00:18:54.760 Test: blockdev writev readv 30 x 1block ...passed 00:18:54.760 Test: blockdev writev readv block ...passed 00:18:54.760 Test: blockdev writev readv size > 128k ...passed 00:18:54.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:54.760 Test: blockdev comparev and writev ...[2024-11-06 13:15:36.584287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.760 [2024-11-06 13:15:36.584342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.760 [2024-11-06 13:15:36.584360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.760 [2024-11-06 13:15:36.584369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.584812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.584826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.584841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.584849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.585270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.585283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.585297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.585305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.585713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.585725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:54.761 [2024-11-06 13:15:36.585739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.761 [2024-11-06 13:15:36.585753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:54.761 passed 00:18:55.022 Test: blockdev nvme passthru rw ...passed 00:18:55.022 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:15:36.670326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:55.022 [2024-11-06 13:15:36.670349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:55.022 [2024-11-06 13:15:36.670586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:55.022 [2024-11-06 13:15:36.670598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:55.022 [2024-11-06 13:15:36.670831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:55.022 [2024-11-06 13:15:36.670844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:55.022 [2024-11-06 13:15:36.671083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:55.022 [2024-11-06 13:15:36.671094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:55.022 passed 00:18:55.022 Test: blockdev nvme admin passthru ...passed 00:18:55.022 Test: blockdev copy ...passed 00:18:55.022 00:18:55.022 Run Summary: Type Total Ran Passed Failed Inactive 00:18:55.022 suites 1 1 n/a 0 0 00:18:55.022 tests 23 23 23 0 0 00:18:55.022 asserts 152 152 152 0 n/a 00:18:55.022 00:18:55.022 Elapsed time = 1.127 seconds 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:55.283 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.284 rmmod nvme_tcp 00:18:55.284 rmmod nvme_fabrics 00:18:55.284 rmmod nvme_keyring 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1731629 ']' 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1731629 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1731629 ']' 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1731629 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1731629 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1731629' 00:18:55.284 killing process with pid 1731629 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1731629 00:18:55.284 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1731629 00:18:55.544 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.804 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.805 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.718 00:18:57.718 real 0m12.544s 00:18:57.718 user 0m13.753s 00:18:57.718 sys 0m6.751s 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.718 ************************************ 00:18:57.718 END TEST nvmf_bdevio_no_huge 00:18:57.718 ************************************ 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.718 ************************************ 00:18:57.718 START TEST nvmf_tls 00:18:57.718 ************************************ 00:18:57.718 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.979 * Looking for test storage... 00:18:57.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.979 --rc genhtml_branch_coverage=1 00:18:57.979 --rc genhtml_function_coverage=1 00:18:57.979 --rc genhtml_legend=1 00:18:57.979 --rc geninfo_all_blocks=1 00:18:57.979 --rc geninfo_unexecuted_blocks=1 00:18:57.979 00:18:57.979 ' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.979 --rc genhtml_branch_coverage=1 00:18:57.979 --rc genhtml_function_coverage=1 00:18:57.979 --rc genhtml_legend=1 00:18:57.979 --rc geninfo_all_blocks=1 00:18:57.979 --rc geninfo_unexecuted_blocks=1 00:18:57.979 00:18:57.979 ' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.979 --rc genhtml_branch_coverage=1 00:18:57.979 --rc genhtml_function_coverage=1 00:18:57.979 --rc genhtml_legend=1 00:18:57.979 --rc geninfo_all_blocks=1 00:18:57.979 --rc geninfo_unexecuted_blocks=1 00:18:57.979 00:18:57.979 ' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:57.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.979 --rc genhtml_branch_coverage=1 00:18:57.979 --rc genhtml_function_coverage=1 00:18:57.979 --rc genhtml_legend=1 00:18:57.979 --rc geninfo_all_blocks=1 00:18:57.979 --rc geninfo_unexecuted_blocks=1 00:18:57.979 00:18:57.979 ' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
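The lt/cmp_versions trace above is the tls test checking whether the installed lcov predates 2.x before choosing coverage options. A minimal standalone sketch of that comparison logic (a hypothetical reimplementation for illustration, not the exact scripts/common.sh source): split both versions on '.', '-' and ':', normalize each field to a number, and compare left to right:

    # lt A B: succeeds (returns 0) when version A sorts before version B.
    decimal() { local d=$1; [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0; }
    lt() {
        local IFS=.-: i
        local -a ver1=($1) ver2=($2)
        for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
            ver1[i]=$(decimal "${ver1[i]:-0}")
            ver2[i]=$(decimal "${ver2[i]:-0}")
            ((ver1[i] > ver2[i])) && return 1
            ((ver1[i] < ver2[i])) && return 0
        done
        return 1
    }
    lt 1.15 2 && echo "lcov is older than 2.x"

In this run lt 1.15 2 succeeds (1 < 2 in the first field), so the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' flags are selected, exactly as the LCOV_OPTS export above records.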
00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.979 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.980 13:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
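What follows is gather_supported_nvmf_pci_devs walking the supported-NIC list and resolving each E810 function to its kernel net device through sysfs. A hedged sketch of that discovery pattern (pci_bus_cache is internal to nvmf/common.sh, so lspci stands in for it here; 0x8086:0x159b is the E810 ID found in this run):

    # Map known NVMe-oF-capable NIC IDs to their kernel net device names.
    for pci in $(lspci -Dnmm | awk -v id="159b" '$0 ~ id {print $1}'); do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done

The two "Found net devices under 0000:31:00.x" lines below are this loop reporting cvl_0_0 and cvl_0_1, which then become TCP_INTERFACE_LIST for the namespace setup that nvmftestinit repeats for the tls suite.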
00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:06.129 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:06.129 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.129 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:06.130 Found net devices under 0000:31:00.0: cvl_0_0 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:06.130 Found net devices under 0000:31:00.1: cvl_0_1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:19:06.130 00:19:06.130 --- 10.0.0.2 ping statistics --- 00:19:06.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.130 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:06.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:06.130 00:19:06.130 --- 10.0.0.1 ping statistics --- 00:19:06.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.130 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1736360 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1736360 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1736360 ']' 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.130 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.130 [2024-11-06 13:15:47.558538] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:19:06.130 [2024-11-06 13:15:47.558599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.130 [2024-11-06 13:15:47.660628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.130 [2024-11-06 13:15:47.711107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.130 [2024-11-06 13:15:47.711162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.130 [2024-11-06 13:15:47.711171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.130 [2024-11-06 13:15:47.711178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.130 [2024-11-06 13:15:47.711184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.130 [2024-11-06 13:15:47.712024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:06.702 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:06.964 true 00:19:06.964 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:06.964 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:06.964 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:06.964 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:06.964 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:07.225 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.225 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:07.487 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:07.487 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:07.487 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:07.487 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:07.487 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:07.747 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:07.748 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:07.748 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:07.748 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.008 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:08.008 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:08.008 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:08.008 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.008 13:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:08.269 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:08.269 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:08.269 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:08.530 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:08.530 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Ilr0TvY1gn 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Z5W92Wv1qz 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Ilr0TvY1gn 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Z5W92Wv1qz 00:19:08.792 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:09.054 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:09.315 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Ilr0TvY1gn 00:19:09.315 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ilr0TvY1gn 00:19:09.315 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.315 [2024-11-06 13:15:51.114828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.315 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.575 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.575 [2024-11-06 13:15:51.435601] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.575 [2024-11-06 13:15:51.435811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.575 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.836 malloc0 00:19:09.836 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.096 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn 00:19:10.096 13:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.358 13:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ilr0TvY1gn 00:19:20.354 Initializing NVMe Controllers 00:19:20.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:20.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:20.354 Initialization complete. Launching workers. 00:19:20.354 ======================================================== 00:19:20.354 Latency(us) 00:19:20.354 Device Information : IOPS MiB/s Average min max 00:19:20.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18832.04 73.56 3398.65 1116.08 4047.37 00:19:20.354 ======================================================== 00:19:20.354 Total : 18832.04 73.56 3398.65 1116.08 4047.37 00:19:20.354 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ilr0TvY1gn 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ilr0TvY1gn 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1739159 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1739159 /var/tmp/bdevperf.sock 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1739159 ']' 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:20.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:20.354 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.614 [2024-11-06 13:16:02.296454] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:20.614 [2024-11-06 13:16:02.296510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739159 ] 00:19:20.614 [2024-11-06 13:16:02.383929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.614 [2024-11-06 13:16:02.419475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.201 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:21.201 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:21.201 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn 00:19:21.478 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.788 [2024-11-06 13:16:03.383100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.788 TLSTESTn1 00:19:21.788 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:21.788 Running I/O for 10 seconds... 
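Everything the TLSTEST run needs has now been wired up over two RPC sockets. Condensed from the trace above (rpc.py and bdevperf.py paths shortened; all commands appear verbatim in the log), the target side and the bdevperf initiator side reduce to:

# target side (nvmf_tgt inside the cvl_0_0_ns_spdk namespace, /var/tmp/spdk.sock):
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side (bdevperf, -s /var/tmp/bdevperf.sock):
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests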
00:19:24.113 4328.00 IOPS, 16.91 MiB/s [2024-11-06T12:16:06.585Z] 4895.00 IOPS, 19.12 MiB/s [2024-11-06T12:16:07.969Z] 5152.33 IOPS, 20.13 MiB/s [2024-11-06T12:16:08.911Z] 5091.75 IOPS, 19.89 MiB/s [2024-11-06T12:16:09.852Z] 5245.20 IOPS, 20.49 MiB/s [2024-11-06T12:16:10.795Z] 5413.83 IOPS, 21.15 MiB/s [2024-11-06T12:16:11.738Z] 5572.71 IOPS, 21.77 MiB/s [2024-11-06T12:16:12.681Z] 5631.62 IOPS, 22.00 MiB/s [2024-11-06T12:16:13.623Z] 5634.44 IOPS, 22.01 MiB/s [2024-11-06T12:16:13.883Z] 5622.20 IOPS, 21.96 MiB/s 00:19:31.981 Latency(us) 00:19:31.981 [2024-11-06T12:16:13.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.981 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.981 Verification LBA range: start 0x0 length 0x2000 00:19:31.981 TLSTESTn1 : 10.03 5617.65 21.94 0.00 0.00 22731.46 5870.93 29272.75 00:19:31.981 [2024-11-06T12:16:13.883Z] =================================================================================================================== 00:19:31.981 [2024-11-06T12:16:13.883Z] Total : 5617.65 21.94 0.00 0.00 22731.46 5870.93 29272.75 00:19:31.981 { 00:19:31.981 "results": [ 00:19:31.981 { 00:19:31.981 "job": "TLSTESTn1", 00:19:31.981 "core_mask": "0x4", 00:19:31.981 "workload": "verify", 00:19:31.981 "status": "finished", 00:19:31.981 "verify_range": { 00:19:31.981 "start": 0, 00:19:31.981 "length": 8192 00:19:31.981 }, 00:19:31.981 "queue_depth": 128, 00:19:31.981 "io_size": 4096, 00:19:31.981 "runtime": 10.030699, 00:19:31.981 "iops": 5617.654362871421, 00:19:31.981 "mibps": 21.943962354966487, 00:19:31.981 "io_failed": 0, 00:19:31.981 "io_timeout": 0, 00:19:31.981 "avg_latency_us": 22731.456207090334, 00:19:31.981 "min_latency_us": 5870.933333333333, 00:19:31.981 "max_latency_us": 29272.746666666666 00:19:31.981 } 00:19:31.981 ], 00:19:31.981 "core_count": 1 00:19:31.981 } 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1739159 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1739159 ']' 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1739159 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1739159 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1739159' 00:19:31.981 killing process with pid 1739159 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1739159 00:19:31.981 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.981 00:19:31.981 Latency(us) 00:19:31.981 [2024-11-06T12:16:13.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.981 [2024-11-06T12:16:13.883Z] 
=================================================================================================================== 00:19:31.981 [2024-11-06T12:16:13.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1739159 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z5W92Wv1qz 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z5W92Wv1qz 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:31.981 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z5W92Wv1qz 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Z5W92Wv1qz 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1741445 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1741445 /var/tmp/bdevperf.sock 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1741445 ']' 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.982 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.982 [2024-11-06 13:16:13.878296] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:31.982 [2024-11-06 13:16:13.878359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741445 ] 00:19:32.242 [2024-11-06 13:16:13.962008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.242 [2024-11-06 13:16:13.990599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.813 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.813 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:32.813 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z5W92Wv1qz 00:19:33.074 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.074 [2024-11-06 13:16:14.969022] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.335 [2024-11-06 13:16:14.978713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:33.335 [2024-11-06 13:16:14.979240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cfbc0 (107): Transport endpoint is not connected 00:19:33.335 [2024-11-06 13:16:14.980236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cfbc0 (9): Bad file descriptor 00:19:33.335 [2024-11-06 13:16:14.981238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:33.335 [2024-11-06 13:16:14.981247] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:33.335 [2024-11-06 13:16:14.981253] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:33.335 [2024-11-06 13:16:14.981262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:33.335 request: 00:19:33.335 { 00:19:33.335 "name": "TLSTEST", 00:19:33.335 "trtype": "tcp", 00:19:33.335 "traddr": "10.0.0.2", 00:19:33.335 "adrfam": "ipv4", 00:19:33.335 "trsvcid": "4420", 00:19:33.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.335 "prchk_reftag": false, 00:19:33.335 "prchk_guard": false, 00:19:33.335 "hdgst": false, 00:19:33.335 "ddgst": false, 00:19:33.335 "psk": "key0", 00:19:33.335 "allow_unrecognized_csi": false, 00:19:33.335 "method": "bdev_nvme_attach_controller", 00:19:33.335 "req_id": 1 00:19:33.335 } 00:19:33.335 Got JSON-RPC error response 00:19:33.335 response: 00:19:33.335 { 00:19:33.335 "code": -5, 00:19:33.335 "message": "Input/output error" 00:19:33.335 } 00:19:33.335 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1741445 00:19:33.335 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1741445 ']' 00:19:33.335 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1741445 00:19:33.335 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1741445 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1741445' 00:19:33.335 killing process with pid 1741445 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1741445 00:19:33.335 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.335 00:19:33.335 Latency(us) 00:19:33.335 [2024-11-06T12:16:15.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.335 [2024-11-06T12:16:15.237Z] =================================================================================================================== 00:19:33.335 [2024-11-06T12:16:15.237Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1741445 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ilr0TvY1gn 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Ilr0TvY1gn 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ilr0TvY1gn 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ilr0TvY1gn 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1741791 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1741791 /var/tmp/bdevperf.sock 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1741791 ']' 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.335 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.335 [2024-11-06 13:16:15.219316] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:19:33.336 [2024-11-06 13:16:15.219372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741791 ] 00:19:33.595 [2024-11-06 13:16:15.304369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.596 [2024-11-06 13:16:15.331916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.166 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:34.166 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:34.166 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn 00:19:34.427 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:34.689 [2024-11-06 13:16:16.346652] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.689 [2024-11-06 13:16:16.351959] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:34.689 [2024-11-06 13:16:16.351981] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:34.689 [2024-11-06 13:16:16.352000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:34.689 [2024-11-06 13:16:16.352829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbdbc0 (107): Transport endpoint is not connected 00:19:34.689 [2024-11-06 13:16:16.353826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbdbc0 (9): Bad file descriptor 00:19:34.689 [2024-11-06 13:16:16.354828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:34.689 [2024-11-06 13:16:16.354837] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.689 [2024-11-06 13:16:16.354843] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:34.689 [2024-11-06 13:16:16.354851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:34.689 request: 00:19:34.689 { 00:19:34.689 "name": "TLSTEST", 00:19:34.689 "trtype": "tcp", 00:19:34.689 "traddr": "10.0.0.2", 00:19:34.689 "adrfam": "ipv4", 00:19:34.689 "trsvcid": "4420", 00:19:34.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.689 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.689 "prchk_reftag": false, 00:19:34.689 "prchk_guard": false, 00:19:34.689 "hdgst": false, 00:19:34.689 "ddgst": false, 00:19:34.689 "psk": "key0", 00:19:34.689 "allow_unrecognized_csi": false, 00:19:34.689 "method": "bdev_nvme_attach_controller", 00:19:34.689 "req_id": 1 00:19:34.689 } 00:19:34.689 Got JSON-RPC error response 00:19:34.689 response: 00:19:34.689 { 00:19:34.689 "code": -5, 00:19:34.689 "message": "Input/output error" 00:19:34.689 } 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1741791 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1741791 ']' 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1741791 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1741791 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1741791' 00:19:34.689 killing process with pid 1741791 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1741791 00:19:34.689 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.689 00:19:34.689 Latency(us) 00:19:34.689 [2024-11-06T12:16:16.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.689 [2024-11-06T12:16:16.591Z] =================================================================================================================== 00:19:34.689 [2024-11-06T12:16:16.591Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1741791 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ilr0TvY1gn 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Ilr0TvY1gn 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ilr0TvY1gn 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ilr0TvY1gn 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1742061 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1742061 /var/tmp/bdevperf.sock 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1742061 ']' 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.689 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.690 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.690 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.690 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.950 [2024-11-06 13:16:16.598771] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:19:34.950 [2024-11-06 13:16:16.598825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742061 ] 00:19:34.950 [2024-11-06 13:16:16.682179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.950 [2024-11-06 13:16:16.710781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.522 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.522 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:35.522 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ilr0TvY1gn 00:19:35.783 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.044 [2024-11-06 13:16:17.733556] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.044 [2024-11-06 13:16:17.738028] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:36.044 [2024-11-06 13:16:17.738050] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:36.044 [2024-11-06 13:16:17.738068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:36.044 [2024-11-06 13:16:17.738704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf99bc0 (107): Transport endpoint is not connected 00:19:36.044 [2024-11-06 13:16:17.739699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf99bc0 (9): Bad file descriptor 00:19:36.044 [2024-11-06 13:16:17.740701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:36.044 [2024-11-06 13:16:17.740709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:36.044 [2024-11-06 13:16:17.740716] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:36.044 [2024-11-06 13:16:17.740724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:36.044 request: 00:19:36.044 { 00:19:36.044 "name": "TLSTEST", 00:19:36.044 "trtype": "tcp", 00:19:36.044 "traddr": "10.0.0.2", 00:19:36.044 "adrfam": "ipv4", 00:19:36.044 "trsvcid": "4420", 00:19:36.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:36.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.044 "prchk_reftag": false, 00:19:36.044 "prchk_guard": false, 00:19:36.044 "hdgst": false, 00:19:36.044 "ddgst": false, 00:19:36.044 "psk": "key0", 00:19:36.044 "allow_unrecognized_csi": false, 00:19:36.044 "method": "bdev_nvme_attach_controller", 00:19:36.044 "req_id": 1 00:19:36.044 } 00:19:36.044 Got JSON-RPC error response 00:19:36.044 response: 00:19:36.044 { 00:19:36.044 "code": -5, 00:19:36.044 "message": "Input/output error" 00:19:36.044 } 00:19:36.044 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1742061 00:19:36.044 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1742061 ']' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1742061 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1742061 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1742061' 00:19:36.045 killing process with pid 1742061 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1742061 00:19:36.045 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.045 00:19:36.045 Latency(us) 00:19:36.045 [2024-11-06T12:16:17.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.045 [2024-11-06T12:16:17.947Z] =================================================================================================================== 00:19:36.045 [2024-11-06T12:16:17.947Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1742061 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:36.045 
13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1742229 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1742229 /var/tmp/bdevperf.sock 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1742229 ']' 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.045 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.305 [2024-11-06 13:16:17.981696] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
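The request/response dumps above are SPDK's JSON-RPC layer at work: rpc.py serializes each call onto the application's UNIX socket, and the failed attach surfaces as a JSON-RPC error object (code -5, Input/output error). A minimal client sketch under the assumption of plain JSON-RPC 2.0 framing; spdk_rpc is a hypothetical helper, not a replacement for scripts/rpc.py:

    import json, socket

    def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1):
        # Send one JSON-RPC 2.0 request over the app's UNIX socket and
        # read until a complete JSON document has arrived.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": req_id,
                              "method": method, "params": params}).encode())
        buf, resp = b"", None
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break              # server closed the connection
            buf += chunk
            try:
                resp = json.loads(buf)
                break              # full response parsed
            except json.JSONDecodeError:
                continue           # partial read; keep receiving
        s.close()
        return resp

    # e.g. the keyring registration the test performs via rpc.py:
    # spdk_rpc("/var/tmp/bdevperf.sock", "keyring_file_add_key",
    #          {"name": "key0", "path": "/tmp/tmp.Ilr0TvY1gn"})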
00:19:36.305 [2024-11-06 13:16:17.981761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742229 ] 00:19:36.305 [2024-11-06 13:16:18.067615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.306 [2024-11-06 13:16:18.095528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.247 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.247 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.247 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:37.247 [2024-11-06 13:16:18.925787] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:37.247 [2024-11-06 13:16:18.925813] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:37.247 request: 00:19:37.247 { 00:19:37.247 "name": "key0", 00:19:37.247 "path": "", 00:19:37.247 "method": "keyring_file_add_key", 00:19:37.247 "req_id": 1 00:19:37.247 } 00:19:37.247 Got JSON-RPC error response 00:19:37.247 response: 00:19:37.247 { 00:19:37.247 "code": -1, 00:19:37.247 "message": "Operation not permitted" 00:19:37.247 } 00:19:37.247 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.247 [2024-11-06 13:16:19.110336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.247 [2024-11-06 13:16:19.110355] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:37.247 request: 00:19:37.247 { 00:19:37.247 "name": "TLSTEST", 00:19:37.247 "trtype": "tcp", 00:19:37.247 "traddr": "10.0.0.2", 00:19:37.247 "adrfam": "ipv4", 00:19:37.247 "trsvcid": "4420", 00:19:37.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.247 "prchk_reftag": false, 00:19:37.247 "prchk_guard": false, 00:19:37.247 "hdgst": false, 00:19:37.247 "ddgst": false, 00:19:37.247 "psk": "key0", 00:19:37.247 "allow_unrecognized_csi": false, 00:19:37.247 "method": "bdev_nvme_attach_controller", 00:19:37.247 "req_id": 1 00:19:37.247 } 00:19:37.247 Got JSON-RPC error response 00:19:37.247 response: 00:19:37.247 { 00:19:37.247 "code": -126, 00:19:37.247 "message": "Required key not available" 00:19:37.247 } 00:19:37.247 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1742229 00:19:37.247 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1742229 ']' 00:19:37.247 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1742229 00:19:37.247 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:37.247 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1742229 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1742229' 00:19:37.509 killing process with pid 1742229 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1742229 00:19:37.509 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.509 00:19:37.509 Latency(us) 00:19:37.509 [2024-11-06T12:16:19.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.509 [2024-11-06T12:16:19.411Z] =================================================================================================================== 00:19:37.509 [2024-11-06T12:16:19.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1742229 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1736360 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1736360 ']' 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1736360 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1736360 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1736360' 00:19:37.509 killing process with pid 1736360 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1736360 00:19:37.509 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1736360 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:37.770 13:16:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qz2gFpl54n 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qz2gFpl54n 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1742536 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1742536 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1742536 ']' 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.770 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.770 [2024-11-06 13:16:19.582789] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:37.770 [2024-11-06 13:16:19.582848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.030 [2024-11-06 13:16:19.675442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.030 [2024-11-06 13:16:19.712253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.030 [2024-11-06 13:16:19.712296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:38.030 [2024-11-06 13:16:19.712303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.030 [2024-11-06 13:16:19.712308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.030 [2024-11-06 13:16:19.712313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.030 [2024-11-06 13:16:19.712943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qz2gFpl54n 00:19:38.600 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.861 [2024-11-06 13:16:20.593917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.861 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:39.121 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:39.121 [2024-11-06 13:16:20.954801] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.121 [2024-11-06 13:16:20.954988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.121 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:39.382 malloc0 00:19:39.382 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:39.643 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:39.643 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qz2gFpl54n 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qz2gFpl54n 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1743080 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1743080 /var/tmp/bdevperf.sock 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1743080 ']' 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:39.904 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.904 [2024-11-06 13:16:21.742740] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
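The key_long generated a few steps back follows the TLS PSK interchange format: the configured PSK bytes with a little-endian CRC-32 appended, base64-encoded and wrapped in an NVMeTLSkey-1:<hmac>: prefix. The sketch below should reproduce the value written to /tmp/tmp.qz2gFpl54n, assuming that layout (the digest argument 2 selects SHA-384); it mirrors what nvmf/common.sh computes with its inline python, not SPDK library code:

    import base64, struct, zlib

    def format_interchange_psk(key: bytes, hmac_id: int) -> str:
        # "NVMeTLSkey-1:<hmac>:" + base64(PSK || CRC-32(PSK), LE) + ":"
        blob = key + struct.pack("<I", zlib.crc32(key))
        return f"NVMeTLSkey-1:{hmac_id:02d}:{base64.b64encode(blob).decode()}:"

    # The 48-character hex string is carried as ASCII bytes, so this should
    # print the same NVMeTLSkey-1:02:... value the test chmods to 0600:
    print(format_interchange_psk(
        b"00112233445566778899aabbccddeeff0011223344556677", 2))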
00:19:39.904 [2024-11-06 13:16:21.742817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743080 ] 00:19:40.165 [2024-11-06 13:16:21.827900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.165 [2024-11-06 13:16:21.856984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.736 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.736 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:40.736 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:40.997 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.997 [2024-11-06 13:16:22.863812] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.258 TLSTESTn1 00:19:41.259 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.259 Running I/O for 10 seconds... 00:19:43.586 5004.00 IOPS, 19.55 MiB/s [2024-11-06T12:16:26.059Z] 5413.50 IOPS, 21.15 MiB/s [2024-11-06T12:16:27.445Z] 5744.00 IOPS, 22.44 MiB/s [2024-11-06T12:16:28.388Z] 5824.25 IOPS, 22.75 MiB/s [2024-11-06T12:16:29.329Z] 5726.60 IOPS, 22.37 MiB/s [2024-11-06T12:16:30.272Z] 5798.83 IOPS, 22.65 MiB/s [2024-11-06T12:16:31.213Z] 5909.00 IOPS, 23.08 MiB/s [2024-11-06T12:16:32.155Z] 5853.50 IOPS, 22.87 MiB/s [2024-11-06T12:16:33.099Z] 5688.33 IOPS, 22.22 MiB/s [2024-11-06T12:16:33.361Z] 5650.50 IOPS, 22.07 MiB/s 00:19:51.459 Latency(us) 00:19:51.459 [2024-11-06T12:16:33.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.459 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:51.459 Verification LBA range: start 0x0 length 0x2000 00:19:51.459 TLSTESTn1 : 10.04 5641.50 22.04 0.00 0.00 22633.80 5324.80 33860.27 00:19:51.459 [2024-11-06T12:16:33.361Z] =================================================================================================================== 00:19:51.459 [2024-11-06T12:16:33.361Z] Total : 5641.50 22.04 0.00 0.00 22633.80 5324.80 33860.27 00:19:51.459 { 00:19:51.459 "results": [ 00:19:51.459 { 00:19:51.459 "job": "TLSTESTn1", 00:19:51.459 "core_mask": "0x4", 00:19:51.459 "workload": "verify", 00:19:51.459 "status": "finished", 00:19:51.459 "verify_range": { 00:19:51.459 "start": 0, 00:19:51.459 "length": 8192 00:19:51.459 }, 00:19:51.459 "queue_depth": 128, 00:19:51.459 "io_size": 4096, 00:19:51.459 "runtime": 10.038462, 00:19:51.459 "iops": 5641.501656329426, 00:19:51.459 "mibps": 22.03711584503682, 00:19:51.459 "io_failed": 0, 00:19:51.459 "io_timeout": 0, 00:19:51.459 "avg_latency_us": 22633.80344869803, 00:19:51.459 "min_latency_us": 5324.8, 00:19:51.459 "max_latency_us": 33860.26666666667 00:19:51.459 } 00:19:51.459 ], 00:19:51.459 "core_count": 1 
00:19:51.459 } 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1743080 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1743080 ']' 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1743080 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1743080 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1743080' 00:19:51.459 killing process with pid 1743080 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1743080 00:19:51.459 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.459 00:19:51.459 Latency(us) 00:19:51.459 [2024-11-06T12:16:33.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.459 [2024-11-06T12:16:33.361Z] =================================================================================================================== 00:19:51.459 [2024-11-06T12:16:33.361Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1743080 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qz2gFpl54n 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qz2gFpl54n 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qz2gFpl54n 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qz2gFpl54n 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.459 13:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qz2gFpl54n 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1745217 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1745217 /var/tmp/bdevperf.sock 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1745217 ']' 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.459 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.721 [2024-11-06 13:16:33.369827] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
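This run flips the negative test around: the key file itself is well-formed, but it was chmodded to 0666 above, and the file-based keyring refuses keys that group or other can read (the "Invalid permissions ... 0100666" rejection appears just below). A sketch of that style of check; the real logic lives in SPDK's keyring module and may differ in detail:

    import os, stat

    def check_key_file(path: str) -> None:
        # Reject a key file whose mode grants any group/other access.
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError(
                f"invalid permissions for key file '{path}': "
                f"{stat.S_IMODE(mode):04o}")

    # A 0666 file fails this check; once tls.sh restores 0600 the key
    # loads again.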
00:19:51.721 [2024-11-06 13:16:33.369883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745217 ] 00:19:51.721 [2024-11-06 13:16:33.455902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.721 [2024-11-06 13:16:33.484518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.293 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.293 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:52.293 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:52.554 [2024-11-06 13:16:34.318881] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qz2gFpl54n': 0100666 00:19:52.554 [2024-11-06 13:16:34.318906] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:52.554 request: 00:19:52.554 { 00:19:52.554 "name": "key0", 00:19:52.554 "path": "/tmp/tmp.qz2gFpl54n", 00:19:52.554 "method": "keyring_file_add_key", 00:19:52.554 "req_id": 1 00:19:52.554 } 00:19:52.554 Got JSON-RPC error response 00:19:52.554 response: 00:19:52.554 { 00:19:52.554 "code": -1, 00:19:52.554 "message": "Operation not permitted" 00:19:52.554 } 00:19:52.554 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.815 [2024-11-06 13:16:34.499407] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.815 [2024-11-06 13:16:34.499426] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:52.815 request: 00:19:52.815 { 00:19:52.815 "name": "TLSTEST", 00:19:52.815 "trtype": "tcp", 00:19:52.815 "traddr": "10.0.0.2", 00:19:52.815 "adrfam": "ipv4", 00:19:52.815 "trsvcid": "4420", 00:19:52.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.815 "prchk_reftag": false, 00:19:52.815 "prchk_guard": false, 00:19:52.815 "hdgst": false, 00:19:52.815 "ddgst": false, 00:19:52.815 "psk": "key0", 00:19:52.815 "allow_unrecognized_csi": false, 00:19:52.815 "method": "bdev_nvme_attach_controller", 00:19:52.815 "req_id": 1 00:19:52.815 } 00:19:52.815 Got JSON-RPC error response 00:19:52.815 response: 00:19:52.815 { 00:19:52.815 "code": -126, 00:19:52.815 "message": "Required key not available" 00:19:52.815 } 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1745217 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1745217 ']' 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1745217 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1745217 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1745217' 00:19:52.815 killing process with pid 1745217 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1745217 00:19:52.815 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.815 00:19:52.815 Latency(us) 00:19:52.815 [2024-11-06T12:16:34.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.815 [2024-11-06T12:16:34.717Z] =================================================================================================================== 00:19:52.815 [2024-11-06T12:16:34.717Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1745217 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1742536 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1742536 ']' 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1742536 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.815 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1742536 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1742536' 00:19:53.076 killing process with pid 1742536 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1742536 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1742536 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1745570 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1745570 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1745570 ']' 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.076 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.076 [2024-11-06 13:16:34.922106] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:53.076 [2024-11-06 13:16:34.922162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.337 [2024-11-06 13:16:35.012012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.337 [2024-11-06 13:16:35.039701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.337 [2024-11-06 13:16:35.039731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.337 [2024-11-06 13:16:35.039737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.337 [2024-11-06 13:16:35.039742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.337 [2024-11-06 13:16:35.039751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
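Between test cases the harness tears processes down with killprocess, which double-checks what the pid refers to (the `ps --no-headers -o comm=` calls above) before signalling it and then waits for the pid to vanish. A rough Python equivalent, assuming a Linux /proc layout and omitting the helper's sudo handling:

    import os, signal, time

    def killprocess(pid: int) -> None:
        # Read the command name the way `ps --no-headers -o comm=` does;
        # the real helper branches here for sudo wrappers, which this
        # sketch does not cover.
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        assert comm != "sudo", "sudo wrappers not handled in this sketch"
        os.kill(pid, signal.SIGTERM)
        while True:                # wait for the pid to disappear
            try:
                os.kill(pid, 0)    # probe: still alive?
            except ProcessLookupError:
                return             # exited
            time.sleep(0.1)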
00:19:53.337 [2024-11-06 13:16:35.040226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qz2gFpl54n 00:19:53.910 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.170 [2024-11-06 13:16:35.923496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.170 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.431 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.431 [2024-11-06 13:16:36.280366] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.431 [2024-11-06 13:16:36.280556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.431 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.692 malloc0 00:19:54.692 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.953 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:54.953 [2024-11-06 
13:16:36.811287] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qz2gFpl54n': 0100666 00:19:54.953 [2024-11-06 13:16:36.811303] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:54.953 request: 00:19:54.953 { 00:19:54.953 "name": "key0", 00:19:54.953 "path": "/tmp/tmp.qz2gFpl54n", 00:19:54.953 "method": "keyring_file_add_key", 00:19:54.953 "req_id": 1 00:19:54.953 } 00:19:54.953 Got JSON-RPC error response 00:19:54.953 response: 00:19:54.953 { 00:19:54.953 "code": -1, 00:19:54.953 "message": "Operation not permitted" 00:19:54.953 } 00:19:54.953 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.214 [2024-11-06 13:16:36.987743] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:55.214 [2024-11-06 13:16:36.987772] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:55.214 request: 00:19:55.214 { 00:19:55.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.214 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.214 "psk": "key0", 00:19:55.214 "method": "nvmf_subsystem_add_host", 00:19:55.214 "req_id": 1 00:19:55.214 } 00:19:55.214 Got JSON-RPC error response 00:19:55.214 response: 00:19:55.214 { 00:19:55.214 "code": -32603, 00:19:55.214 "message": "Internal error" 00:19:55.214 } 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1745570 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1745570 ']' 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1745570 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1745570 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1745570' 00:19:55.214 killing process with pid 1745570 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1745570 00:19:55.214 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1745570 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qz2gFpl54n 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:55.476 13:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1746070 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1746070 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1746070 ']' 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:55.476 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.476 [2024-11-06 13:16:37.269381] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:55.477 [2024-11-06 13:16:37.269440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.477 [2024-11-06 13:16:37.363103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.737 [2024-11-06 13:16:37.399825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.737 [2024-11-06 13:16:37.399865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.737 [2024-11-06 13:16:37.399871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.737 [2024-11-06 13:16:37.399876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.737 [2024-11-06 13:16:37.399881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
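The -32603 "Internal error" from nvmf_subsystem_add_host above is a knock-on failure: keyring_file_add_key had already refused the 0666 file, so no 'key0' existed when the host was added with --psk. With permissions back at 0600, a fresh target (pid 1746070) now reruns setup_nvmf_tgt. As ordered RPCs that sequence is roughly the following; the method names match the rpc.py calls in the log, but the parameter spellings are illustrative assumptions, and spdk_rpc is the sketch from earlier:

    SOCK = "/var/tmp/spdk.sock"
    for method, params in [
        ("nvmf_create_transport",       {"trtype": "tcp",
                                         "c2h_success": False}),
        ("nvmf_create_subsystem",       {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "serial_number": "SPDK00000000000001",
                                         "max_namespaces": 10}),
        ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "listen_address": {"trtype": "tcp",
                                                            "traddr": "10.0.0.2",
                                                            "trsvcid": "4420"},
                                         "secure_channel": True}),  # the -k flag
        ("bdev_malloc_create",          {"name": "malloc0",    # 32 MiB of
                                         "num_blocks": 8192,   # 4 KiB blocks
                                         "block_size": 4096}),
        ("nvmf_subsystem_add_ns",       {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "namespace": {"bdev_name": "malloc0"}}),
        ("keyring_file_add_key",        {"name": "key0",
                                         "path": "/tmp/tmp.qz2gFpl54n"}),
        ("nvmf_subsystem_add_host",     {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                         "host": "nqn.2016-06.io.spdk:host1",
                                         "psk": "key0"}),
    ]:
        spdk_rpc(SOCK, method, params)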
00:19:55.737 [2024-11-06 13:16:37.400470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qz2gFpl54n 00:19:56.308 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.568 [2024-11-06 13:16:38.256900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.568 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.568 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.829 [2024-11-06 13:16:38.605761] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.829 [2024-11-06 13:16:38.605950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.829 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.090 malloc0 00:19:57.090 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.090 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:57.351 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1746607 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1746607 /var/tmp/bdevperf.sock 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1746607 ']' 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:57.611 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.611 [2024-11-06 13:16:39.336386] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:57.611 [2024-11-06 13:16:39.336430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746607 ] 00:19:57.611 [2024-11-06 13:16:39.418893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.611 [2024-11-06 13:16:39.453945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.872 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:57.872 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:57.872 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:19:57.872 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.132 [2024-11-06 13:16:39.847824] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.132 TLSTESTn1 00:19:58.132 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:58.392 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:58.392 "subsystems": [ 00:19:58.392 { 00:19:58.392 "subsystem": "keyring", 00:19:58.392 "config": [ 00:19:58.392 { 00:19:58.392 "method": "keyring_file_add_key", 00:19:58.392 "params": { 00:19:58.392 "name": "key0", 00:19:58.392 "path": "/tmp/tmp.qz2gFpl54n" 00:19:58.392 } 00:19:58.392 } 00:19:58.392 ] 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "subsystem": "iobuf", 00:19:58.392 "config": [ 00:19:58.392 { 00:19:58.392 "method": "iobuf_set_options", 00:19:58.392 "params": { 00:19:58.392 "small_pool_count": 8192, 00:19:58.392 "large_pool_count": 1024, 00:19:58.392 "small_bufsize": 8192, 00:19:58.392 "large_bufsize": 135168, 00:19:58.392 "enable_numa": false 00:19:58.392 } 00:19:58.392 } 00:19:58.392 ] 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "subsystem": "sock", 00:19:58.392 "config": [ 00:19:58.392 { 00:19:58.392 "method": "sock_set_default_impl", 00:19:58.392 "params": { 00:19:58.392 "impl_name": "posix" 
00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "method": "sock_impl_set_options", 00:19:58.392 "params": { 00:19:58.392 "impl_name": "ssl", 00:19:58.392 "recv_buf_size": 4096, 00:19:58.392 "send_buf_size": 4096, 00:19:58.392 "enable_recv_pipe": true, 00:19:58.392 "enable_quickack": false, 00:19:58.392 "enable_placement_id": 0, 00:19:58.392 "enable_zerocopy_send_server": true, 00:19:58.392 "enable_zerocopy_send_client": false, 00:19:58.392 "zerocopy_threshold": 0, 00:19:58.392 "tls_version": 0, 00:19:58.392 "enable_ktls": false 00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "method": "sock_impl_set_options", 00:19:58.392 "params": { 00:19:58.392 "impl_name": "posix", 00:19:58.392 "recv_buf_size": 2097152, 00:19:58.392 "send_buf_size": 2097152, 00:19:58.392 "enable_recv_pipe": true, 00:19:58.392 "enable_quickack": false, 00:19:58.392 "enable_placement_id": 0, 00:19:58.392 "enable_zerocopy_send_server": true, 00:19:58.392 "enable_zerocopy_send_client": false, 00:19:58.392 "zerocopy_threshold": 0, 00:19:58.392 "tls_version": 0, 00:19:58.392 "enable_ktls": false 00:19:58.392 } 00:19:58.392 } 00:19:58.392 ] 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "subsystem": "vmd", 00:19:58.392 "config": [] 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "subsystem": "accel", 00:19:58.392 "config": [ 00:19:58.392 { 00:19:58.392 "method": "accel_set_options", 00:19:58.392 "params": { 00:19:58.392 "small_cache_size": 128, 00:19:58.392 "large_cache_size": 16, 00:19:58.392 "task_count": 2048, 00:19:58.392 "sequence_count": 2048, 00:19:58.392 "buf_count": 2048 00:19:58.392 } 00:19:58.392 } 00:19:58.392 ] 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "subsystem": "bdev", 00:19:58.392 "config": [ 00:19:58.392 { 00:19:58.392 "method": "bdev_set_options", 00:19:58.392 "params": { 00:19:58.392 "bdev_io_pool_size": 65535, 00:19:58.392 "bdev_io_cache_size": 256, 00:19:58.392 "bdev_auto_examine": true, 00:19:58.392 "iobuf_small_cache_size": 128, 00:19:58.392 "iobuf_large_cache_size": 16 00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "method": "bdev_raid_set_options", 00:19:58.392 "params": { 00:19:58.392 "process_window_size_kb": 1024, 00:19:58.392 "process_max_bandwidth_mb_sec": 0 00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "method": "bdev_iscsi_set_options", 00:19:58.392 "params": { 00:19:58.392 "timeout_sec": 30 00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.392 "method": "bdev_nvme_set_options", 00:19:58.392 "params": { 00:19:58.392 "action_on_timeout": "none", 00:19:58.392 "timeout_us": 0, 00:19:58.392 "timeout_admin_us": 0, 00:19:58.392 "keep_alive_timeout_ms": 10000, 00:19:58.392 "arbitration_burst": 0, 00:19:58.392 "low_priority_weight": 0, 00:19:58.392 "medium_priority_weight": 0, 00:19:58.392 "high_priority_weight": 0, 00:19:58.392 "nvme_adminq_poll_period_us": 10000, 00:19:58.392 "nvme_ioq_poll_period_us": 0, 00:19:58.392 "io_queue_requests": 0, 00:19:58.392 "delay_cmd_submit": true, 00:19:58.392 "transport_retry_count": 4, 00:19:58.392 "bdev_retry_count": 3, 00:19:58.392 "transport_ack_timeout": 0, 00:19:58.392 "ctrlr_loss_timeout_sec": 0, 00:19:58.392 "reconnect_delay_sec": 0, 00:19:58.392 "fast_io_fail_timeout_sec": 0, 00:19:58.392 "disable_auto_failback": false, 00:19:58.392 "generate_uuids": false, 00:19:58.392 "transport_tos": 0, 00:19:58.392 "nvme_error_stat": false, 00:19:58.392 "rdma_srq_size": 0, 00:19:58.392 "io_path_stat": false, 00:19:58.392 "allow_accel_sequence": false, 00:19:58.392 "rdma_max_cq_size": 0, 00:19:58.392 
"rdma_cm_event_timeout_ms": 0, 00:19:58.392 "dhchap_digests": [ 00:19:58.392 "sha256", 00:19:58.392 "sha384", 00:19:58.392 "sha512" 00:19:58.392 ], 00:19:58.392 "dhchap_dhgroups": [ 00:19:58.392 "null", 00:19:58.392 "ffdhe2048", 00:19:58.392 "ffdhe3072", 00:19:58.392 "ffdhe4096", 00:19:58.392 "ffdhe6144", 00:19:58.392 "ffdhe8192" 00:19:58.392 ] 00:19:58.392 } 00:19:58.392 }, 00:19:58.392 { 00:19:58.393 "method": "bdev_nvme_set_hotplug", 00:19:58.393 "params": { 00:19:58.393 "period_us": 100000, 00:19:58.393 "enable": false 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "bdev_malloc_create", 00:19:58.393 "params": { 00:19:58.393 "name": "malloc0", 00:19:58.393 "num_blocks": 8192, 00:19:58.393 "block_size": 4096, 00:19:58.393 "physical_block_size": 4096, 00:19:58.393 "uuid": "8d884b9b-86a2-4c3a-8627-1e56b6806baa", 00:19:58.393 "optimal_io_boundary": 0, 00:19:58.393 "md_size": 0, 00:19:58.393 "dif_type": 0, 00:19:58.393 "dif_is_head_of_md": false, 00:19:58.393 "dif_pi_format": 0 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "bdev_wait_for_examine" 00:19:58.393 } 00:19:58.393 ] 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "subsystem": "nbd", 00:19:58.393 "config": [] 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "subsystem": "scheduler", 00:19:58.393 "config": [ 00:19:58.393 { 00:19:58.393 "method": "framework_set_scheduler", 00:19:58.393 "params": { 00:19:58.393 "name": "static" 00:19:58.393 } 00:19:58.393 } 00:19:58.393 ] 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "subsystem": "nvmf", 00:19:58.393 "config": [ 00:19:58.393 { 00:19:58.393 "method": "nvmf_set_config", 00:19:58.393 "params": { 00:19:58.393 "discovery_filter": "match_any", 00:19:58.393 "admin_cmd_passthru": { 00:19:58.393 "identify_ctrlr": false 00:19:58.393 }, 00:19:58.393 "dhchap_digests": [ 00:19:58.393 "sha256", 00:19:58.393 "sha384", 00:19:58.393 "sha512" 00:19:58.393 ], 00:19:58.393 "dhchap_dhgroups": [ 00:19:58.393 "null", 00:19:58.393 "ffdhe2048", 00:19:58.393 "ffdhe3072", 00:19:58.393 "ffdhe4096", 00:19:58.393 "ffdhe6144", 00:19:58.393 "ffdhe8192" 00:19:58.393 ] 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_set_max_subsystems", 00:19:58.393 "params": { 00:19:58.393 "max_subsystems": 1024 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_set_crdt", 00:19:58.393 "params": { 00:19:58.393 "crdt1": 0, 00:19:58.393 "crdt2": 0, 00:19:58.393 "crdt3": 0 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_create_transport", 00:19:58.393 "params": { 00:19:58.393 "trtype": "TCP", 00:19:58.393 "max_queue_depth": 128, 00:19:58.393 "max_io_qpairs_per_ctrlr": 127, 00:19:58.393 "in_capsule_data_size": 4096, 00:19:58.393 "max_io_size": 131072, 00:19:58.393 "io_unit_size": 131072, 00:19:58.393 "max_aq_depth": 128, 00:19:58.393 "num_shared_buffers": 511, 00:19:58.393 "buf_cache_size": 4294967295, 00:19:58.393 "dif_insert_or_strip": false, 00:19:58.393 "zcopy": false, 00:19:58.393 "c2h_success": false, 00:19:58.393 "sock_priority": 0, 00:19:58.393 "abort_timeout_sec": 1, 00:19:58.393 "ack_timeout": 0, 00:19:58.393 "data_wr_pool_size": 0 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_create_subsystem", 00:19:58.393 "params": { 00:19:58.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.393 "allow_any_host": false, 00:19:58.393 "serial_number": "SPDK00000000000001", 00:19:58.393 "model_number": "SPDK bdev Controller", 00:19:58.393 "max_namespaces": 10, 00:19:58.393 "min_cntlid": 1, 00:19:58.393 
"max_cntlid": 65519, 00:19:58.393 "ana_reporting": false 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_subsystem_add_host", 00:19:58.393 "params": { 00:19:58.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.393 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.393 "psk": "key0" 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_subsystem_add_ns", 00:19:58.393 "params": { 00:19:58.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.393 "namespace": { 00:19:58.393 "nsid": 1, 00:19:58.393 "bdev_name": "malloc0", 00:19:58.393 "nguid": "8D884B9B86A24C3A86271E56B6806BAA", 00:19:58.393 "uuid": "8d884b9b-86a2-4c3a-8627-1e56b6806baa", 00:19:58.393 "no_auto_visible": false 00:19:58.393 } 00:19:58.393 } 00:19:58.393 }, 00:19:58.393 { 00:19:58.393 "method": "nvmf_subsystem_add_listener", 00:19:58.393 "params": { 00:19:58.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.393 "listen_address": { 00:19:58.393 "trtype": "TCP", 00:19:58.393 "adrfam": "IPv4", 00:19:58.393 "traddr": "10.0.0.2", 00:19:58.393 "trsvcid": "4420" 00:19:58.393 }, 00:19:58.393 "secure_channel": true 00:19:58.393 } 00:19:58.393 } 00:19:58.393 ] 00:19:58.393 } 00:19:58.393 ] 00:19:58.393 }' 00:19:58.393 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:58.654 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:58.654 "subsystems": [ 00:19:58.654 { 00:19:58.654 "subsystem": "keyring", 00:19:58.654 "config": [ 00:19:58.654 { 00:19:58.654 "method": "keyring_file_add_key", 00:19:58.654 "params": { 00:19:58.654 "name": "key0", 00:19:58.654 "path": "/tmp/tmp.qz2gFpl54n" 00:19:58.654 } 00:19:58.654 } 00:19:58.654 ] 00:19:58.654 }, 00:19:58.654 { 00:19:58.654 "subsystem": "iobuf", 00:19:58.654 "config": [ 00:19:58.654 { 00:19:58.654 "method": "iobuf_set_options", 00:19:58.654 "params": { 00:19:58.654 "small_pool_count": 8192, 00:19:58.654 "large_pool_count": 1024, 00:19:58.654 "small_bufsize": 8192, 00:19:58.654 "large_bufsize": 135168, 00:19:58.654 "enable_numa": false 00:19:58.654 } 00:19:58.654 } 00:19:58.654 ] 00:19:58.654 }, 00:19:58.654 { 00:19:58.654 "subsystem": "sock", 00:19:58.654 "config": [ 00:19:58.654 { 00:19:58.654 "method": "sock_set_default_impl", 00:19:58.654 "params": { 00:19:58.654 "impl_name": "posix" 00:19:58.654 } 00:19:58.654 }, 00:19:58.654 { 00:19:58.654 "method": "sock_impl_set_options", 00:19:58.654 "params": { 00:19:58.654 "impl_name": "ssl", 00:19:58.654 "recv_buf_size": 4096, 00:19:58.654 "send_buf_size": 4096, 00:19:58.654 "enable_recv_pipe": true, 00:19:58.654 "enable_quickack": false, 00:19:58.654 "enable_placement_id": 0, 00:19:58.654 "enable_zerocopy_send_server": true, 00:19:58.654 "enable_zerocopy_send_client": false, 00:19:58.654 "zerocopy_threshold": 0, 00:19:58.654 "tls_version": 0, 00:19:58.654 "enable_ktls": false 00:19:58.654 } 00:19:58.654 }, 00:19:58.655 { 00:19:58.655 "method": "sock_impl_set_options", 00:19:58.655 "params": { 00:19:58.655 "impl_name": "posix", 00:19:58.655 "recv_buf_size": 2097152, 00:19:58.655 "send_buf_size": 2097152, 00:19:58.655 "enable_recv_pipe": true, 00:19:58.655 "enable_quickack": false, 00:19:58.655 "enable_placement_id": 0, 00:19:58.655 "enable_zerocopy_send_server": true, 00:19:58.655 "enable_zerocopy_send_client": false, 00:19:58.655 "zerocopy_threshold": 0, 00:19:58.655 "tls_version": 0, 00:19:58.655 "enable_ktls": false 00:19:58.655 } 00:19:58.655 
} 00:19:58.655 ] 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "subsystem": "vmd", 00:19:58.655 "config": [] 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "subsystem": "accel", 00:19:58.655 "config": [ 00:19:58.655 { 00:19:58.655 "method": "accel_set_options", 00:19:58.655 "params": { 00:19:58.655 "small_cache_size": 128, 00:19:58.655 "large_cache_size": 16, 00:19:58.655 "task_count": 2048, 00:19:58.655 "sequence_count": 2048, 00:19:58.655 "buf_count": 2048 00:19:58.655 } 00:19:58.655 } 00:19:58.655 ] 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "subsystem": "bdev", 00:19:58.655 "config": [ 00:19:58.655 { 00:19:58.655 "method": "bdev_set_options", 00:19:58.655 "params": { 00:19:58.655 "bdev_io_pool_size": 65535, 00:19:58.655 "bdev_io_cache_size": 256, 00:19:58.655 "bdev_auto_examine": true, 00:19:58.655 "iobuf_small_cache_size": 128, 00:19:58.655 "iobuf_large_cache_size": 16 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": "bdev_raid_set_options", 00:19:58.655 "params": { 00:19:58.655 "process_window_size_kb": 1024, 00:19:58.655 "process_max_bandwidth_mb_sec": 0 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": "bdev_iscsi_set_options", 00:19:58.655 "params": { 00:19:58.655 "timeout_sec": 30 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": "bdev_nvme_set_options", 00:19:58.655 "params": { 00:19:58.655 "action_on_timeout": "none", 00:19:58.655 "timeout_us": 0, 00:19:58.655 "timeout_admin_us": 0, 00:19:58.655 "keep_alive_timeout_ms": 10000, 00:19:58.655 "arbitration_burst": 0, 00:19:58.655 "low_priority_weight": 0, 00:19:58.655 "medium_priority_weight": 0, 00:19:58.655 "high_priority_weight": 0, 00:19:58.655 "nvme_adminq_poll_period_us": 10000, 00:19:58.655 "nvme_ioq_poll_period_us": 0, 00:19:58.655 "io_queue_requests": 512, 00:19:58.655 "delay_cmd_submit": true, 00:19:58.655 "transport_retry_count": 4, 00:19:58.655 "bdev_retry_count": 3, 00:19:58.655 "transport_ack_timeout": 0, 00:19:58.655 "ctrlr_loss_timeout_sec": 0, 00:19:58.655 "reconnect_delay_sec": 0, 00:19:58.655 "fast_io_fail_timeout_sec": 0, 00:19:58.655 "disable_auto_failback": false, 00:19:58.655 "generate_uuids": false, 00:19:58.655 "transport_tos": 0, 00:19:58.655 "nvme_error_stat": false, 00:19:58.655 "rdma_srq_size": 0, 00:19:58.655 "io_path_stat": false, 00:19:58.655 "allow_accel_sequence": false, 00:19:58.655 "rdma_max_cq_size": 0, 00:19:58.655 "rdma_cm_event_timeout_ms": 0, 00:19:58.655 "dhchap_digests": [ 00:19:58.655 "sha256", 00:19:58.655 "sha384", 00:19:58.655 "sha512" 00:19:58.655 ], 00:19:58.655 "dhchap_dhgroups": [ 00:19:58.655 "null", 00:19:58.655 "ffdhe2048", 00:19:58.655 "ffdhe3072", 00:19:58.655 "ffdhe4096", 00:19:58.655 "ffdhe6144", 00:19:58.655 "ffdhe8192" 00:19:58.655 ] 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": "bdev_nvme_attach_controller", 00:19:58.655 "params": { 00:19:58.655 "name": "TLSTEST", 00:19:58.655 "trtype": "TCP", 00:19:58.655 "adrfam": "IPv4", 00:19:58.655 "traddr": "10.0.0.2", 00:19:58.655 "trsvcid": "4420", 00:19:58.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.655 "prchk_reftag": false, 00:19:58.655 "prchk_guard": false, 00:19:58.655 "ctrlr_loss_timeout_sec": 0, 00:19:58.655 "reconnect_delay_sec": 0, 00:19:58.655 "fast_io_fail_timeout_sec": 0, 00:19:58.655 "psk": "key0", 00:19:58.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.655 "hdgst": false, 00:19:58.655 "ddgst": false, 00:19:58.655 "multipath": "multipath" 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": 
"bdev_nvme_set_hotplug", 00:19:58.655 "params": { 00:19:58.655 "period_us": 100000, 00:19:58.655 "enable": false 00:19:58.655 } 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "method": "bdev_wait_for_examine" 00:19:58.655 } 00:19:58.655 ] 00:19:58.655 }, 00:19:58.655 { 00:19:58.655 "subsystem": "nbd", 00:19:58.655 "config": [] 00:19:58.655 } 00:19:58.655 ] 00:19:58.655 }' 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1746607 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1746607 ']' 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1746607 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1746607 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1746607' 00:19:58.655 killing process with pid 1746607 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1746607 00:19:58.655 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.655 00:19:58.655 Latency(us) 00:19:58.655 [2024-11-06T12:16:40.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.655 [2024-11-06T12:16:40.557Z] =================================================================================================================== 00:19:58.655 [2024-11-06T12:16:40.557Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.655 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1746607 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1746070 ']' 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1746070' 00:19:58.917 killing process with pid 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1746070 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.917 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:58.917 "subsystems": [ 00:19:58.917 { 00:19:58.917 "subsystem": "keyring", 00:19:58.917 "config": [ 00:19:58.917 { 00:19:58.917 "method": "keyring_file_add_key", 00:19:58.917 "params": { 00:19:58.917 "name": "key0", 00:19:58.917 "path": "/tmp/tmp.qz2gFpl54n" 00:19:58.917 } 00:19:58.917 } 00:19:58.917 ] 00:19:58.917 }, 00:19:58.917 { 00:19:58.917 "subsystem": "iobuf", 00:19:58.917 "config": [ 00:19:58.917 { 00:19:58.917 "method": "iobuf_set_options", 00:19:58.917 "params": { 00:19:58.917 "small_pool_count": 8192, 00:19:58.917 "large_pool_count": 1024, 00:19:58.917 "small_bufsize": 8192, 00:19:58.917 "large_bufsize": 135168, 00:19:58.917 "enable_numa": false 00:19:58.917 } 00:19:58.917 } 00:19:58.917 ] 00:19:58.917 }, 00:19:58.917 { 00:19:58.917 "subsystem": "sock", 00:19:58.917 "config": [ 00:19:58.917 { 00:19:58.917 "method": "sock_set_default_impl", 00:19:58.917 "params": { 00:19:58.917 "impl_name": "posix" 00:19:58.917 } 00:19:58.917 }, 00:19:58.917 { 00:19:58.917 "method": "sock_impl_set_options", 00:19:58.917 "params": { 00:19:58.917 "impl_name": "ssl", 00:19:58.917 "recv_buf_size": 4096, 00:19:58.917 "send_buf_size": 4096, 00:19:58.917 "enable_recv_pipe": true, 00:19:58.917 "enable_quickack": false, 00:19:58.917 "enable_placement_id": 0, 00:19:58.917 "enable_zerocopy_send_server": true, 00:19:58.918 "enable_zerocopy_send_client": false, 00:19:58.918 "zerocopy_threshold": 0, 00:19:58.918 "tls_version": 0, 00:19:58.918 "enable_ktls": false 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "sock_impl_set_options", 00:19:58.918 "params": { 00:19:58.918 "impl_name": "posix", 00:19:58.918 "recv_buf_size": 2097152, 00:19:58.918 "send_buf_size": 2097152, 00:19:58.918 "enable_recv_pipe": true, 00:19:58.918 "enable_quickack": false, 00:19:58.918 "enable_placement_id": 0, 00:19:58.918 "enable_zerocopy_send_server": true, 00:19:58.918 "enable_zerocopy_send_client": false, 00:19:58.918 "zerocopy_threshold": 0, 00:19:58.918 "tls_version": 0, 00:19:58.918 "enable_ktls": false 00:19:58.918 } 00:19:58.918 } 00:19:58.918 ] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "vmd", 00:19:58.918 "config": [] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "accel", 00:19:58.918 "config": [ 00:19:58.918 { 00:19:58.918 "method": "accel_set_options", 00:19:58.918 "params": { 00:19:58.918 "small_cache_size": 128, 00:19:58.918 "large_cache_size": 16, 00:19:58.918 "task_count": 2048, 00:19:58.918 "sequence_count": 2048, 00:19:58.918 "buf_count": 2048 00:19:58.918 } 00:19:58.918 } 00:19:58.918 ] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "bdev", 00:19:58.918 "config": [ 00:19:58.918 { 00:19:58.918 "method": "bdev_set_options", 00:19:58.918 "params": { 00:19:58.918 "bdev_io_pool_size": 65535, 00:19:58.918 "bdev_io_cache_size": 256, 00:19:58.918 "bdev_auto_examine": true, 00:19:58.918 "iobuf_small_cache_size": 128, 00:19:58.918 "iobuf_large_cache_size": 16 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_raid_set_options", 00:19:58.918 "params": { 00:19:58.918 
"process_window_size_kb": 1024, 00:19:58.918 "process_max_bandwidth_mb_sec": 0 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_iscsi_set_options", 00:19:58.918 "params": { 00:19:58.918 "timeout_sec": 30 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_nvme_set_options", 00:19:58.918 "params": { 00:19:58.918 "action_on_timeout": "none", 00:19:58.918 "timeout_us": 0, 00:19:58.918 "timeout_admin_us": 0, 00:19:58.918 "keep_alive_timeout_ms": 10000, 00:19:58.918 "arbitration_burst": 0, 00:19:58.918 "low_priority_weight": 0, 00:19:58.918 "medium_priority_weight": 0, 00:19:58.918 "high_priority_weight": 0, 00:19:58.918 "nvme_adminq_poll_period_us": 10000, 00:19:58.918 "nvme_ioq_poll_period_us": 0, 00:19:58.918 "io_queue_requests": 0, 00:19:58.918 "delay_cmd_submit": true, 00:19:58.918 "transport_retry_count": 4, 00:19:58.918 "bdev_retry_count": 3, 00:19:58.918 "transport_ack_timeout": 0, 00:19:58.918 "ctrlr_loss_timeout_sec": 0, 00:19:58.918 "reconnect_delay_sec": 0, 00:19:58.918 "fast_io_fail_timeout_sec": 0, 00:19:58.918 "disable_auto_failback": false, 00:19:58.918 "generate_uuids": false, 00:19:58.918 "transport_tos": 0, 00:19:58.918 "nvme_error_stat": false, 00:19:58.918 "rdma_srq_size": 0, 00:19:58.918 "io_path_stat": false, 00:19:58.918 "allow_accel_sequence": false, 00:19:58.918 "rdma_max_cq_size": 0, 00:19:58.918 "rdma_cm_event_timeout_ms": 0, 00:19:58.918 "dhchap_digests": [ 00:19:58.918 "sha256", 00:19:58.918 "sha384", 00:19:58.918 "sha512" 00:19:58.918 ], 00:19:58.918 "dhchap_dhgroups": [ 00:19:58.918 "null", 00:19:58.918 "ffdhe2048", 00:19:58.918 "ffdhe3072", 00:19:58.918 "ffdhe4096", 00:19:58.918 "ffdhe6144", 00:19:58.918 "ffdhe8192" 00:19:58.918 ] 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_nvme_set_hotplug", 00:19:58.918 "params": { 00:19:58.918 "period_us": 100000, 00:19:58.918 "enable": false 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_malloc_create", 00:19:58.918 "params": { 00:19:58.918 "name": "malloc0", 00:19:58.918 "num_blocks": 8192, 00:19:58.918 "block_size": 4096, 00:19:58.918 "physical_block_size": 4096, 00:19:58.918 "uuid": "8d884b9b-86a2-4c3a-8627-1e56b6806baa", 00:19:58.918 "optimal_io_boundary": 0, 00:19:58.918 "md_size": 0, 00:19:58.918 "dif_type": 0, 00:19:58.918 "dif_is_head_of_md": false, 00:19:58.918 "dif_pi_format": 0 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "bdev_wait_for_examine" 00:19:58.918 } 00:19:58.918 ] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "nbd", 00:19:58.918 "config": [] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "scheduler", 00:19:58.918 "config": [ 00:19:58.918 { 00:19:58.918 "method": "framework_set_scheduler", 00:19:58.918 "params": { 00:19:58.918 "name": "static" 00:19:58.918 } 00:19:58.918 } 00:19:58.918 ] 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "subsystem": "nvmf", 00:19:58.918 "config": [ 00:19:58.918 { 00:19:58.918 "method": "nvmf_set_config", 00:19:58.918 "params": { 00:19:58.918 "discovery_filter": "match_any", 00:19:58.918 "admin_cmd_passthru": { 00:19:58.918 "identify_ctrlr": false 00:19:58.918 }, 00:19:58.918 "dhchap_digests": [ 00:19:58.918 "sha256", 00:19:58.918 "sha384", 00:19:58.918 "sha512" 00:19:58.918 ], 00:19:58.918 "dhchap_dhgroups": [ 00:19:58.918 "null", 00:19:58.918 "ffdhe2048", 00:19:58.918 "ffdhe3072", 00:19:58.918 "ffdhe4096", 00:19:58.918 "ffdhe6144", 00:19:58.918 "ffdhe8192" 00:19:58.918 ] 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 
00:19:58.918 "method": "nvmf_set_max_subsystems", 00:19:58.918 "params": { 00:19:58.918 "max_subsystems": 1024 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "nvmf_set_crdt", 00:19:58.918 "params": { 00:19:58.918 "crdt1": 0, 00:19:58.918 "crdt2": 0, 00:19:58.918 "crdt3": 0 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "nvmf_create_transport", 00:19:58.918 "params": { 00:19:58.918 "trtype": "TCP", 00:19:58.918 "max_queue_depth": 128, 00:19:58.918 "max_io_qpairs_per_ctrlr": 127, 00:19:58.918 "in_capsule_data_size": 4096, 00:19:58.918 "max_io_size": 131072, 00:19:58.918 "io_unit_size": 131072, 00:19:58.918 "max_aq_depth": 128, 00:19:58.918 "num_shared_buffers": 511, 00:19:58.918 "buf_cache_size": 4294967295, 00:19:58.918 "dif_insert_or_strip": false, 00:19:58.918 "zcopy": false, 00:19:58.918 "c2h_success": false, 00:19:58.918 "sock_priority": 0, 00:19:58.918 "abort_timeout_sec": 1, 00:19:58.918 "ack_timeout": 0, 00:19:58.918 "data_wr_pool_size": 0 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "nvmf_create_subsystem", 00:19:58.918 "params": { 00:19:58.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.918 "allow_any_host": false, 00:19:58.918 "serial_number": "SPDK00000000000001", 00:19:58.918 "model_number": "SPDK bdev Controller", 00:19:58.918 "max_namespaces": 10, 00:19:58.918 "min_cntlid": 1, 00:19:58.918 "max_cntlid": 65519, 00:19:58.918 "ana_reporting": false 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "nvmf_subsystem_add_host", 00:19:58.918 "params": { 00:19:58.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.918 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.918 "psk": "key0" 00:19:58.918 } 00:19:58.918 }, 00:19:58.918 { 00:19:58.918 "method": "nvmf_subsystem_add_ns", 00:19:58.918 "params": { 00:19:58.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.918 "namespace": { 00:19:58.918 "nsid": 1, 00:19:58.918 "bdev_name": "malloc0", 00:19:58.919 "nguid": "8D884B9B86A24C3A86271E56B6806BAA", 00:19:58.919 "uuid": "8d884b9b-86a2-4c3a-8627-1e56b6806baa", 00:19:58.919 "no_auto_visible": false 00:19:58.919 } 00:19:58.919 } 00:19:58.919 }, 00:19:58.919 { 00:19:58.919 "method": "nvmf_subsystem_add_listener", 00:19:58.919 "params": { 00:19:58.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.919 "listen_address": { 00:19:58.919 "trtype": "TCP", 00:19:58.919 "adrfam": "IPv4", 00:19:58.919 "traddr": "10.0.0.2", 00:19:58.919 "trsvcid": "4420" 00:19:58.919 }, 00:19:58.919 "secure_channel": true 00:19:58.919 } 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 } 00:19:58.919 ] 00:19:58.919 }' 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1746837 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1746837 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1746837 ']' 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:19:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.919 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.180 [2024-11-06 13:16:40.865633] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:19:59.180 [2024-11-06 13:16:40.865688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.180 [2024-11-06 13:16:40.957940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.180 [2024-11-06 13:16:40.994017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.180 [2024-11-06 13:16:40.994052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.180 [2024-11-06 13:16:40.994058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.180 [2024-11-06 13:16:40.994063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.180 [2024-11-06 13:16:40.994067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.180 [2024-11-06 13:16:40.994601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.440 [2024-11-06 13:16:41.187440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.440 [2024-11-06 13:16:41.219465] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.440 [2024-11-06 13:16:41.219657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1747000 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1747000 /var/tmp/bdevperf.sock 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1747000 ']' 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:00.011 
13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.011 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:00.011 "subsystems": [ 00:20:00.011 { 00:20:00.011 "subsystem": "keyring", 00:20:00.011 "config": [ 00:20:00.011 { 00:20:00.011 "method": "keyring_file_add_key", 00:20:00.011 "params": { 00:20:00.011 "name": "key0", 00:20:00.011 "path": "/tmp/tmp.qz2gFpl54n" 00:20:00.011 } 00:20:00.011 } 00:20:00.011 ] 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "subsystem": "iobuf", 00:20:00.011 "config": [ 00:20:00.011 { 00:20:00.011 "method": "iobuf_set_options", 00:20:00.011 "params": { 00:20:00.011 "small_pool_count": 8192, 00:20:00.011 "large_pool_count": 1024, 00:20:00.011 "small_bufsize": 8192, 00:20:00.011 "large_bufsize": 135168, 00:20:00.011 "enable_numa": false 00:20:00.011 } 00:20:00.011 } 00:20:00.011 ] 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "subsystem": "sock", 00:20:00.011 "config": [ 00:20:00.011 { 00:20:00.011 "method": "sock_set_default_impl", 00:20:00.011 "params": { 00:20:00.011 "impl_name": "posix" 00:20:00.011 } 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "method": "sock_impl_set_options", 00:20:00.011 "params": { 00:20:00.011 "impl_name": "ssl", 00:20:00.011 "recv_buf_size": 4096, 00:20:00.011 "send_buf_size": 4096, 00:20:00.011 "enable_recv_pipe": true, 00:20:00.011 "enable_quickack": false, 00:20:00.011 "enable_placement_id": 0, 00:20:00.011 "enable_zerocopy_send_server": true, 00:20:00.011 "enable_zerocopy_send_client": false, 00:20:00.011 "zerocopy_threshold": 0, 00:20:00.011 "tls_version": 0, 00:20:00.011 "enable_ktls": false 00:20:00.011 } 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "method": "sock_impl_set_options", 00:20:00.011 "params": { 00:20:00.011 "impl_name": "posix", 00:20:00.011 "recv_buf_size": 2097152, 00:20:00.011 "send_buf_size": 2097152, 00:20:00.011 "enable_recv_pipe": true, 00:20:00.011 "enable_quickack": false, 00:20:00.011 "enable_placement_id": 0, 00:20:00.011 "enable_zerocopy_send_server": true, 00:20:00.011 "enable_zerocopy_send_client": false, 00:20:00.011 "zerocopy_threshold": 0, 00:20:00.011 "tls_version": 0, 00:20:00.011 "enable_ktls": false 00:20:00.011 } 00:20:00.011 } 00:20:00.011 ] 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "subsystem": "vmd", 00:20:00.011 "config": [] 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "subsystem": "accel", 00:20:00.011 "config": [ 00:20:00.011 { 00:20:00.011 "method": "accel_set_options", 00:20:00.011 "params": { 00:20:00.011 "small_cache_size": 128, 00:20:00.011 "large_cache_size": 16, 00:20:00.011 "task_count": 2048, 00:20:00.011 "sequence_count": 2048, 00:20:00.011 "buf_count": 2048 00:20:00.011 } 00:20:00.011 } 00:20:00.011 ] 00:20:00.011 }, 00:20:00.011 { 00:20:00.011 "subsystem": "bdev", 00:20:00.011 "config": [ 00:20:00.012 { 00:20:00.012 "method": "bdev_set_options", 00:20:00.012 "params": { 00:20:00.012 "bdev_io_pool_size": 65535, 00:20:00.012 "bdev_io_cache_size": 256, 00:20:00.012 "bdev_auto_examine": true, 00:20:00.012 "iobuf_small_cache_size": 128, 00:20:00.012 "iobuf_large_cache_size": 16 00:20:00.012 } 00:20:00.012 
}, 00:20:00.012 { 00:20:00.012 "method": "bdev_raid_set_options", 00:20:00.012 "params": { 00:20:00.012 "process_window_size_kb": 1024, 00:20:00.012 "process_max_bandwidth_mb_sec": 0 00:20:00.012 } 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "method": "bdev_iscsi_set_options", 00:20:00.012 "params": { 00:20:00.012 "timeout_sec": 30 00:20:00.012 } 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "method": "bdev_nvme_set_options", 00:20:00.012 "params": { 00:20:00.012 "action_on_timeout": "none", 00:20:00.012 "timeout_us": 0, 00:20:00.012 "timeout_admin_us": 0, 00:20:00.012 "keep_alive_timeout_ms": 10000, 00:20:00.012 "arbitration_burst": 0, 00:20:00.012 "low_priority_weight": 0, 00:20:00.012 "medium_priority_weight": 0, 00:20:00.012 "high_priority_weight": 0, 00:20:00.012 "nvme_adminq_poll_period_us": 10000, 00:20:00.012 "nvme_ioq_poll_period_us": 0, 00:20:00.012 "io_queue_requests": 512, 00:20:00.012 "delay_cmd_submit": true, 00:20:00.012 "transport_retry_count": 4, 00:20:00.012 "bdev_retry_count": 3, 00:20:00.012 "transport_ack_timeout": 0, 00:20:00.012 "ctrlr_loss_timeout_sec": 0, 00:20:00.012 "reconnect_delay_sec": 0, 00:20:00.012 "fast_io_fail_timeout_sec": 0, 00:20:00.012 "disable_auto_failback": false, 00:20:00.012 "generate_uuids": false, 00:20:00.012 "transport_tos": 0, 00:20:00.012 "nvme_error_stat": false, 00:20:00.012 "rdma_srq_size": 0, 00:20:00.012 "io_path_stat": false, 00:20:00.012 "allow_accel_sequence": false, 00:20:00.012 "rdma_max_cq_size": 0, 00:20:00.012 "rdma_cm_event_timeout_ms": 0, 00:20:00.012 "dhchap_digests": [ 00:20:00.012 "sha256", 00:20:00.012 "sha384", 00:20:00.012 "sha512" 00:20:00.012 ], 00:20:00.012 "dhchap_dhgroups": [ 00:20:00.012 "null", 00:20:00.012 "ffdhe2048", 00:20:00.012 "ffdhe3072", 00:20:00.012 "ffdhe4096", 00:20:00.012 "ffdhe6144", 00:20:00.012 "ffdhe8192" 00:20:00.012 ] 00:20:00.012 } 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "method": "bdev_nvme_attach_controller", 00:20:00.012 "params": { 00:20:00.012 "name": "TLSTEST", 00:20:00.012 "trtype": "TCP", 00:20:00.012 "adrfam": "IPv4", 00:20:00.012 "traddr": "10.0.0.2", 00:20:00.012 "trsvcid": "4420", 00:20:00.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.012 "prchk_reftag": false, 00:20:00.012 "prchk_guard": false, 00:20:00.012 "ctrlr_loss_timeout_sec": 0, 00:20:00.012 "reconnect_delay_sec": 0, 00:20:00.012 "fast_io_fail_timeout_sec": 0, 00:20:00.012 "psk": "key0", 00:20:00.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.012 "hdgst": false, 00:20:00.012 "ddgst": false, 00:20:00.012 "multipath": "multipath" 00:20:00.012 } 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "method": "bdev_nvme_set_hotplug", 00:20:00.012 "params": { 00:20:00.012 "period_us": 100000, 00:20:00.012 "enable": false 00:20:00.012 } 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "method": "bdev_wait_for_examine" 00:20:00.012 } 00:20:00.012 ] 00:20:00.012 }, 00:20:00.012 { 00:20:00.012 "subsystem": "nbd", 00:20:00.012 "config": [] 00:20:00.012 } 00:20:00.012 ] 00:20:00.012 }' 00:20:00.012 [2024-11-06 13:16:41.758644] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:20:00.012 [2024-11-06 13:16:41.758699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747000 ] 00:20:00.012 [2024-11-06 13:16:41.847991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.012 [2024-11-06 13:16:41.883179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.272 [2024-11-06 13:16:42.022614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.880 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.880 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:00.880 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:00.880 Running I/O for 10 seconds... 00:20:02.849 4922.00 IOPS, 19.23 MiB/s [2024-11-06T12:16:45.693Z] 4861.00 IOPS, 18.99 MiB/s [2024-11-06T12:16:47.079Z] 5011.00 IOPS, 19.57 MiB/s [2024-11-06T12:16:47.652Z] 5144.00 IOPS, 20.09 MiB/s [2024-11-06T12:16:49.034Z] 5098.40 IOPS, 19.92 MiB/s [2024-11-06T12:16:49.974Z] 5130.67 IOPS, 20.04 MiB/s [2024-11-06T12:16:50.916Z] 5193.00 IOPS, 20.29 MiB/s [2024-11-06T12:16:51.856Z] 5270.88 IOPS, 20.59 MiB/s [2024-11-06T12:16:52.797Z] 5238.11 IOPS, 20.46 MiB/s [2024-11-06T12:16:52.797Z] 5271.50 IOPS, 20.59 MiB/s 00:20:10.895 Latency(us) 00:20:10.895 [2024-11-06T12:16:52.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.895 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.895 Verification LBA range: start 0x0 length 0x2000 00:20:10.895 TLSTESTn1 : 10.01 5277.37 20.61 0.00 0.00 24218.11 5434.03 27415.89 00:20:10.895 [2024-11-06T12:16:52.797Z] =================================================================================================================== 00:20:10.895 [2024-11-06T12:16:52.797Z] Total : 5277.37 20.61 0.00 0.00 24218.11 5434.03 27415.89 00:20:10.895 { 00:20:10.895 "results": [ 00:20:10.895 { 00:20:10.895 "job": "TLSTESTn1", 00:20:10.895 "core_mask": "0x4", 00:20:10.895 "workload": "verify", 00:20:10.895 "status": "finished", 00:20:10.895 "verify_range": { 00:20:10.895 "start": 0, 00:20:10.895 "length": 8192 00:20:10.895 }, 00:20:10.895 "queue_depth": 128, 00:20:10.895 "io_size": 4096, 00:20:10.895 "runtime": 10.01295, 00:20:10.895 "iops": 5277.3658112744, 00:20:10.895 "mibps": 20.614710200290624, 00:20:10.895 "io_failed": 0, 00:20:10.895 "io_timeout": 0, 00:20:10.895 "avg_latency_us": 24218.10939681819, 00:20:10.895 "min_latency_us": 5434.026666666667, 00:20:10.895 "max_latency_us": 27415.893333333333 00:20:10.895 } 00:20:10.895 ], 00:20:10.895 "core_count": 1 00:20:10.895 } 00:20:10.895 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.895 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1747000 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1747000 ']' 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1747000 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1747000 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1747000' 00:20:10.896 killing process with pid 1747000 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1747000 00:20:10.896 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.896 00:20:10.896 Latency(us) 00:20:10.896 [2024-11-06T12:16:52.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.896 [2024-11-06T12:16:52.798Z] =================================================================================================================== 00:20:10.896 [2024-11-06T12:16:52.798Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.896 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1747000 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1746837 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1746837 ']' 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1746837 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1746837 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1746837' 00:20:11.156 killing process with pid 1746837 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1746837 00:20:11.156 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1746837 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1749219 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1749219 00:20:11.156 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
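The 10-second verify run torn down above is the whole TLS round trip in miniature: one PSK interchange file, registered on both sides under the keyring name key0; the target gates the host NQN on it, and the initiator presents it at attach time. Condensed from the exact commands in the trace (key file /tmp/tmp.qz2gFpl54n, target RPCs on the default /var/tmp/spdk.sock, initiator RPCs on bdevperf's /var/tmp/bdevperf.sock, paths abbreviated relative to the spdk checkout), the wiring is:

  # target side: register the PSK file and bind it to the allowed host NQN
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side: the same key name in bdevperf's keyring, then attach with --psk
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The trace below rebuilds this arrangement from scratch for the next test case.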
00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1749219 ']' 00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:11.157 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.416 [2024-11-06 13:16:53.103104] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:11.416 [2024-11-06 13:16:53.103159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.416 [2024-11-06 13:16:53.199577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.416 [2024-11-06 13:16:53.248486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.417 [2024-11-06 13:16:53.248543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.417 [2024-11-06 13:16:53.248552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.417 [2024-11-06 13:16:53.248560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.417 [2024-11-06 13:16:53.248566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
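Unlike the two instances above, which were handed a pre-serialized save_config dump on /dev/fd/62 (target) and /dev/fd/63 (bdevperf) at startup, this nvmf_tgt comes up empty and is configured one RPC at a time over /var/tmp/spdk.sock. The /dev/fd handoff itself is ordinary bash file-descriptor plumbing; a minimal sketch of the same capture-and-replay round trip, assuming an spdk checkout as the working directory (process substitution picks its own /dev/fd number rather than the harness's 62):

  # capture the live configuration, then replay it into a fresh target
  tgtconf=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")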
00:20:11.417 [2024-11-06 13:16:53.249356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qz2gFpl54n 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qz2gFpl54n 00:20:12.355 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.355 [2024-11-06 13:16:54.123787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.355 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.615 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.875 [2024-11-06 13:16:54.516782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.875 [2024-11-06 13:16:54.517074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.875 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.875 malloc0 00:20:12.875 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.136 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1749719 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1749719 /var/tmp/bdevperf.sock 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1749719 ']' 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:13.396 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.656 [2024-11-06 13:16:55.336027] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:13.656 [2024-11-06 13:16:55.336101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749719 ] 00:20:13.656 [2024-11-06 13:16:55.423571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.656 [2024-11-06 13:16:55.457311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.596 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:14.596 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:14.596 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:20:14.596 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:14.596 [2024-11-06 13:16:56.467251] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.856 nvme0n1 00:20:14.856 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.856 Running I/O for 1 seconds... 
00:20:15.796 4977.00 IOPS, 19.44 MiB/s 00:20:15.796 Latency(us) 00:20:15.796 [2024-11-06T12:16:57.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.796 Verification LBA range: start 0x0 length 0x2000 00:20:15.796 nvme0n1 : 1.02 5013.65 19.58 0.00 0.00 25361.28 4532.91 29054.29 00:20:15.796 [2024-11-06T12:16:57.698Z] =================================================================================================================== 00:20:15.796 [2024-11-06T12:16:57.698Z] Total : 5013.65 19.58 0.00 0.00 25361.28 4532.91 29054.29 00:20:15.796 { 00:20:15.796 "results": [ 00:20:15.796 { 00:20:15.796 "job": "nvme0n1", 00:20:15.796 "core_mask": "0x2", 00:20:15.796 "workload": "verify", 00:20:15.796 "status": "finished", 00:20:15.796 "verify_range": { 00:20:15.796 "start": 0, 00:20:15.796 "length": 8192 00:20:15.796 }, 00:20:15.796 "queue_depth": 128, 00:20:15.796 "io_size": 4096, 00:20:15.796 "runtime": 1.018419, 00:20:15.796 "iops": 5013.653515890807, 00:20:15.796 "mibps": 19.584584046448466, 00:20:15.796 "io_failed": 0, 00:20:15.796 "io_timeout": 0, 00:20:15.796 "avg_latency_us": 25361.280584932756, 00:20:15.796 "min_latency_us": 4532.906666666667, 00:20:15.796 "max_latency_us": 29054.293333333335 00:20:15.796 } 00:20:15.796 ], 00:20:15.796 "core_count": 1 00:20:15.796 } 00:20:15.796 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1749719 00:20:15.796 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1749719 ']' 00:20:15.796 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1749719 00:20:15.796 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:15.796 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1749719 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1749719' 00:20:16.057 killing process with pid 1749719 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1749719 00:20:16.057 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.057 00:20:16.057 Latency(us) 00:20:16.057 [2024-11-06T12:16:57.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.057 [2024-11-06T12:16:57.959Z] =================================================================================================================== 00:20:16.057 [2024-11-06T12:16:57.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1749719 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1749219 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1749219 ']' 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1749219 00:20:16.057 13:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1749219 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1749219' 00:20:16.057 killing process with pid 1749219 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1749219 00:20:16.057 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1749219 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1750154 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1750154 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1750154 ']' 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:16.318 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.318 [2024-11-06 13:16:58.115687] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:16.318 [2024-11-06 13:16:58.115743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.318 [2024-11-06 13:16:58.213067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.579 [2024-11-06 13:16:58.263010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.579 [2024-11-06 13:16:58.263065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:16.579 [2024-11-06 13:16:58.263073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.579 [2024-11-06 13:16:58.263080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.579 [2024-11-06 13:16:58.263088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.579 [2024-11-06 13:16:58.263888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.150 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.150 [2024-11-06 13:16:58.989709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.150 malloc0 00:20:17.150 [2024-11-06 13:16:59.019817] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.150 [2024-11-06 13:16:59.020138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.150 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.409 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1750423 00:20:17.409 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1750423 /var/tmp/bdevperf.sock 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1750423 ']' 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:17.410 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.410 [2024-11-06 13:16:59.103408] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:20:17.410 [2024-11-06 13:16:59.103468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750423 ] 00:20:17.410 [2024-11-06 13:16:59.192467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.410 [2024-11-06 13:16:59.226445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.348 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:18.348 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:18.348 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qz2gFpl54n 00:20:18.348 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:18.348 [2024-11-06 13:17:00.216284] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.608 nvme0n1 00:20:18.608 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.608 Running I/O for 1 seconds... 00:20:19.549 4144.00 IOPS, 16.19 MiB/s 00:20:19.549 Latency(us) 00:20:19.549 [2024-11-06T12:17:01.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.549 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:19.549 Verification LBA range: start 0x0 length 0x2000 00:20:19.549 nvme0n1 : 1.01 4222.19 16.49 0.00 0.00 30136.13 4505.60 80827.73 00:20:19.549 [2024-11-06T12:17:01.451Z] =================================================================================================================== 00:20:19.549 [2024-11-06T12:17:01.451Z] Total : 4222.19 16.49 0.00 0.00 30136.13 4505.60 80827.73 00:20:19.549 { 00:20:19.549 "results": [ 00:20:19.549 { 00:20:19.549 "job": "nvme0n1", 00:20:19.549 "core_mask": "0x2", 00:20:19.549 "workload": "verify", 00:20:19.549 "status": "finished", 00:20:19.549 "verify_range": { 00:20:19.549 "start": 0, 00:20:19.549 "length": 8192 00:20:19.549 }, 00:20:19.549 "queue_depth": 128, 00:20:19.549 "io_size": 4096, 00:20:19.549 "runtime": 1.011798, 00:20:19.549 "iops": 4222.186641997711, 00:20:19.549 "mibps": 16.49291657030356, 00:20:19.549 "io_failed": 0, 00:20:19.549 "io_timeout": 0, 00:20:19.549 "avg_latency_us": 30136.134631710363, 00:20:19.549 "min_latency_us": 4505.6, 00:20:19.549 "max_latency_us": 80827.73333333334 00:20:19.549 } 00:20:19.549 ], 00:20:19.549 "core_count": 1 00:20:19.549 } 00:20:19.549 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:19.549 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.549 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.810 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.810 13:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:19.810 "subsystems": [ 00:20:19.810 { 00:20:19.810 "subsystem": "keyring", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "keyring_file_add_key", 00:20:19.810 "params": { 00:20:19.810 "name": "key0", 00:20:19.810 "path": "/tmp/tmp.qz2gFpl54n" 00:20:19.810 } 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "iobuf", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "iobuf_set_options", 00:20:19.810 "params": { 00:20:19.810 "small_pool_count": 8192, 00:20:19.810 "large_pool_count": 1024, 00:20:19.810 "small_bufsize": 8192, 00:20:19.810 "large_bufsize": 135168, 00:20:19.810 "enable_numa": false 00:20:19.810 } 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "sock", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "sock_set_default_impl", 00:20:19.810 "params": { 00:20:19.810 "impl_name": "posix" 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "sock_impl_set_options", 00:20:19.810 "params": { 00:20:19.810 "impl_name": "ssl", 00:20:19.810 "recv_buf_size": 4096, 00:20:19.810 "send_buf_size": 4096, 00:20:19.810 "enable_recv_pipe": true, 00:20:19.810 "enable_quickack": false, 00:20:19.810 "enable_placement_id": 0, 00:20:19.810 "enable_zerocopy_send_server": true, 00:20:19.810 "enable_zerocopy_send_client": false, 00:20:19.810 "zerocopy_threshold": 0, 00:20:19.810 "tls_version": 0, 00:20:19.810 "enable_ktls": false 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "sock_impl_set_options", 00:20:19.810 "params": { 00:20:19.810 "impl_name": "posix", 00:20:19.810 "recv_buf_size": 2097152, 00:20:19.810 "send_buf_size": 2097152, 00:20:19.810 "enable_recv_pipe": true, 00:20:19.810 "enable_quickack": false, 00:20:19.810 "enable_placement_id": 0, 00:20:19.810 "enable_zerocopy_send_server": true, 00:20:19.810 "enable_zerocopy_send_client": false, 00:20:19.810 "zerocopy_threshold": 0, 00:20:19.810 "tls_version": 0, 00:20:19.810 "enable_ktls": false 00:20:19.810 } 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "vmd", 00:20:19.810 "config": [] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "accel", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "accel_set_options", 00:20:19.810 "params": { 00:20:19.810 "small_cache_size": 128, 00:20:19.810 "large_cache_size": 16, 00:20:19.810 "task_count": 2048, 00:20:19.810 "sequence_count": 2048, 00:20:19.810 "buf_count": 2048 00:20:19.810 } 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "bdev", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "bdev_set_options", 00:20:19.810 "params": { 00:20:19.810 "bdev_io_pool_size": 65535, 00:20:19.810 "bdev_io_cache_size": 256, 00:20:19.810 "bdev_auto_examine": true, 00:20:19.810 "iobuf_small_cache_size": 128, 00:20:19.810 "iobuf_large_cache_size": 16 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_raid_set_options", 00:20:19.810 "params": { 00:20:19.810 "process_window_size_kb": 1024, 00:20:19.810 "process_max_bandwidth_mb_sec": 0 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_iscsi_set_options", 00:20:19.810 "params": { 00:20:19.810 "timeout_sec": 30 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_nvme_set_options", 00:20:19.810 "params": { 00:20:19.810 "action_on_timeout": "none", 00:20:19.810 
"timeout_us": 0, 00:20:19.810 "timeout_admin_us": 0, 00:20:19.810 "keep_alive_timeout_ms": 10000, 00:20:19.810 "arbitration_burst": 0, 00:20:19.810 "low_priority_weight": 0, 00:20:19.810 "medium_priority_weight": 0, 00:20:19.810 "high_priority_weight": 0, 00:20:19.810 "nvme_adminq_poll_period_us": 10000, 00:20:19.810 "nvme_ioq_poll_period_us": 0, 00:20:19.810 "io_queue_requests": 0, 00:20:19.810 "delay_cmd_submit": true, 00:20:19.810 "transport_retry_count": 4, 00:20:19.810 "bdev_retry_count": 3, 00:20:19.810 "transport_ack_timeout": 0, 00:20:19.810 "ctrlr_loss_timeout_sec": 0, 00:20:19.810 "reconnect_delay_sec": 0, 00:20:19.810 "fast_io_fail_timeout_sec": 0, 00:20:19.810 "disable_auto_failback": false, 00:20:19.810 "generate_uuids": false, 00:20:19.810 "transport_tos": 0, 00:20:19.810 "nvme_error_stat": false, 00:20:19.810 "rdma_srq_size": 0, 00:20:19.810 "io_path_stat": false, 00:20:19.810 "allow_accel_sequence": false, 00:20:19.810 "rdma_max_cq_size": 0, 00:20:19.810 "rdma_cm_event_timeout_ms": 0, 00:20:19.810 "dhchap_digests": [ 00:20:19.810 "sha256", 00:20:19.810 "sha384", 00:20:19.810 "sha512" 00:20:19.810 ], 00:20:19.810 "dhchap_dhgroups": [ 00:20:19.810 "null", 00:20:19.810 "ffdhe2048", 00:20:19.810 "ffdhe3072", 00:20:19.810 "ffdhe4096", 00:20:19.810 "ffdhe6144", 00:20:19.810 "ffdhe8192" 00:20:19.810 ] 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_nvme_set_hotplug", 00:20:19.810 "params": { 00:20:19.810 "period_us": 100000, 00:20:19.810 "enable": false 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_malloc_create", 00:20:19.810 "params": { 00:20:19.810 "name": "malloc0", 00:20:19.810 "num_blocks": 8192, 00:20:19.810 "block_size": 4096, 00:20:19.810 "physical_block_size": 4096, 00:20:19.810 "uuid": "a1a31a97-0fff-47b4-a73b-4cbabc8fa69f", 00:20:19.810 "optimal_io_boundary": 0, 00:20:19.810 "md_size": 0, 00:20:19.810 "dif_type": 0, 00:20:19.810 "dif_is_head_of_md": false, 00:20:19.810 "dif_pi_format": 0 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "bdev_wait_for_examine" 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "nbd", 00:20:19.810 "config": [] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "scheduler", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "framework_set_scheduler", 00:20:19.810 "params": { 00:20:19.810 "name": "static" 00:20:19.810 } 00:20:19.810 } 00:20:19.810 ] 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "subsystem": "nvmf", 00:20:19.810 "config": [ 00:20:19.810 { 00:20:19.810 "method": "nvmf_set_config", 00:20:19.810 "params": { 00:20:19.810 "discovery_filter": "match_any", 00:20:19.810 "admin_cmd_passthru": { 00:20:19.810 "identify_ctrlr": false 00:20:19.810 }, 00:20:19.810 "dhchap_digests": [ 00:20:19.810 "sha256", 00:20:19.810 "sha384", 00:20:19.810 "sha512" 00:20:19.810 ], 00:20:19.810 "dhchap_dhgroups": [ 00:20:19.810 "null", 00:20:19.810 "ffdhe2048", 00:20:19.810 "ffdhe3072", 00:20:19.810 "ffdhe4096", 00:20:19.810 "ffdhe6144", 00:20:19.810 "ffdhe8192" 00:20:19.810 ] 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "nvmf_set_max_subsystems", 00:20:19.810 "params": { 00:20:19.810 "max_subsystems": 1024 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "nvmf_set_crdt", 00:20:19.810 "params": { 00:20:19.810 "crdt1": 0, 00:20:19.810 "crdt2": 0, 00:20:19.810 "crdt3": 0 00:20:19.810 } 00:20:19.810 }, 00:20:19.810 { 00:20:19.810 "method": "nvmf_create_transport", 00:20:19.810 "params": 
{ 00:20:19.810 "trtype": "TCP", 00:20:19.810 "max_queue_depth": 128, 00:20:19.810 "max_io_qpairs_per_ctrlr": 127, 00:20:19.810 "in_capsule_data_size": 4096, 00:20:19.810 "max_io_size": 131072, 00:20:19.810 "io_unit_size": 131072, 00:20:19.810 "max_aq_depth": 128, 00:20:19.810 "num_shared_buffers": 511, 00:20:19.811 "buf_cache_size": 4294967295, 00:20:19.811 "dif_insert_or_strip": false, 00:20:19.811 "zcopy": false, 00:20:19.811 "c2h_success": false, 00:20:19.811 "sock_priority": 0, 00:20:19.811 "abort_timeout_sec": 1, 00:20:19.811 "ack_timeout": 0, 00:20:19.811 "data_wr_pool_size": 0 00:20:19.811 } 00:20:19.811 }, 00:20:19.811 { 00:20:19.811 "method": "nvmf_create_subsystem", 00:20:19.811 "params": { 00:20:19.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.811 "allow_any_host": false, 00:20:19.811 "serial_number": "00000000000000000000", 00:20:19.811 "model_number": "SPDK bdev Controller", 00:20:19.811 "max_namespaces": 32, 00:20:19.811 "min_cntlid": 1, 00:20:19.811 "max_cntlid": 65519, 00:20:19.811 "ana_reporting": false 00:20:19.811 } 00:20:19.811 }, 00:20:19.811 { 00:20:19.811 "method": "nvmf_subsystem_add_host", 00:20:19.811 "params": { 00:20:19.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.811 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.811 "psk": "key0" 00:20:19.811 } 00:20:19.811 }, 00:20:19.811 { 00:20:19.811 "method": "nvmf_subsystem_add_ns", 00:20:19.811 "params": { 00:20:19.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.811 "namespace": { 00:20:19.811 "nsid": 1, 00:20:19.811 "bdev_name": "malloc0", 00:20:19.811 "nguid": "A1A31A970FFF47B4A73B4CBABC8FA69F", 00:20:19.811 "uuid": "a1a31a97-0fff-47b4-a73b-4cbabc8fa69f", 00:20:19.811 "no_auto_visible": false 00:20:19.811 } 00:20:19.811 } 00:20:19.811 }, 00:20:19.811 { 00:20:19.811 "method": "nvmf_subsystem_add_listener", 00:20:19.811 "params": { 00:20:19.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.811 "listen_address": { 00:20:19.811 "trtype": "TCP", 00:20:19.811 "adrfam": "IPv4", 00:20:19.811 "traddr": "10.0.0.2", 00:20:19.811 "trsvcid": "4420" 00:20:19.811 }, 00:20:19.811 "secure_channel": false, 00:20:19.811 "sock_impl": "ssl" 00:20:19.811 } 00:20:19.811 } 00:20:19.811 ] 00:20:19.811 } 00:20:19.811 ] 00:20:19.811 }' 00:20:19.811 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.071 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:20.071 "subsystems": [ 00:20:20.071 { 00:20:20.071 "subsystem": "keyring", 00:20:20.071 "config": [ 00:20:20.071 { 00:20:20.071 "method": "keyring_file_add_key", 00:20:20.071 "params": { 00:20:20.071 "name": "key0", 00:20:20.071 "path": "/tmp/tmp.qz2gFpl54n" 00:20:20.071 } 00:20:20.071 } 00:20:20.071 ] 00:20:20.071 }, 00:20:20.071 { 00:20:20.071 "subsystem": "iobuf", 00:20:20.071 "config": [ 00:20:20.071 { 00:20:20.071 "method": "iobuf_set_options", 00:20:20.071 "params": { 00:20:20.071 "small_pool_count": 8192, 00:20:20.071 "large_pool_count": 1024, 00:20:20.071 "small_bufsize": 8192, 00:20:20.071 "large_bufsize": 135168, 00:20:20.072 "enable_numa": false 00:20:20.072 } 00:20:20.072 } 00:20:20.072 ] 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "subsystem": "sock", 00:20:20.072 "config": [ 00:20:20.072 { 00:20:20.072 "method": "sock_set_default_impl", 00:20:20.072 "params": { 00:20:20.072 "impl_name": "posix" 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "sock_impl_set_options", 00:20:20.072 
"params": { 00:20:20.072 "impl_name": "ssl", 00:20:20.072 "recv_buf_size": 4096, 00:20:20.072 "send_buf_size": 4096, 00:20:20.072 "enable_recv_pipe": true, 00:20:20.072 "enable_quickack": false, 00:20:20.072 "enable_placement_id": 0, 00:20:20.072 "enable_zerocopy_send_server": true, 00:20:20.072 "enable_zerocopy_send_client": false, 00:20:20.072 "zerocopy_threshold": 0, 00:20:20.072 "tls_version": 0, 00:20:20.072 "enable_ktls": false 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "sock_impl_set_options", 00:20:20.072 "params": { 00:20:20.072 "impl_name": "posix", 00:20:20.072 "recv_buf_size": 2097152, 00:20:20.072 "send_buf_size": 2097152, 00:20:20.072 "enable_recv_pipe": true, 00:20:20.072 "enable_quickack": false, 00:20:20.072 "enable_placement_id": 0, 00:20:20.072 "enable_zerocopy_send_server": true, 00:20:20.072 "enable_zerocopy_send_client": false, 00:20:20.072 "zerocopy_threshold": 0, 00:20:20.072 "tls_version": 0, 00:20:20.072 "enable_ktls": false 00:20:20.072 } 00:20:20.072 } 00:20:20.072 ] 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "subsystem": "vmd", 00:20:20.072 "config": [] 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "subsystem": "accel", 00:20:20.072 "config": [ 00:20:20.072 { 00:20:20.072 "method": "accel_set_options", 00:20:20.072 "params": { 00:20:20.072 "small_cache_size": 128, 00:20:20.072 "large_cache_size": 16, 00:20:20.072 "task_count": 2048, 00:20:20.072 "sequence_count": 2048, 00:20:20.072 "buf_count": 2048 00:20:20.072 } 00:20:20.072 } 00:20:20.072 ] 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "subsystem": "bdev", 00:20:20.072 "config": [ 00:20:20.072 { 00:20:20.072 "method": "bdev_set_options", 00:20:20.072 "params": { 00:20:20.072 "bdev_io_pool_size": 65535, 00:20:20.072 "bdev_io_cache_size": 256, 00:20:20.072 "bdev_auto_examine": true, 00:20:20.072 "iobuf_small_cache_size": 128, 00:20:20.072 "iobuf_large_cache_size": 16 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_raid_set_options", 00:20:20.072 "params": { 00:20:20.072 "process_window_size_kb": 1024, 00:20:20.072 "process_max_bandwidth_mb_sec": 0 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_iscsi_set_options", 00:20:20.072 "params": { 00:20:20.072 "timeout_sec": 30 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_nvme_set_options", 00:20:20.072 "params": { 00:20:20.072 "action_on_timeout": "none", 00:20:20.072 "timeout_us": 0, 00:20:20.072 "timeout_admin_us": 0, 00:20:20.072 "keep_alive_timeout_ms": 10000, 00:20:20.072 "arbitration_burst": 0, 00:20:20.072 "low_priority_weight": 0, 00:20:20.072 "medium_priority_weight": 0, 00:20:20.072 "high_priority_weight": 0, 00:20:20.072 "nvme_adminq_poll_period_us": 10000, 00:20:20.072 "nvme_ioq_poll_period_us": 0, 00:20:20.072 "io_queue_requests": 512, 00:20:20.072 "delay_cmd_submit": true, 00:20:20.072 "transport_retry_count": 4, 00:20:20.072 "bdev_retry_count": 3, 00:20:20.072 "transport_ack_timeout": 0, 00:20:20.072 "ctrlr_loss_timeout_sec": 0, 00:20:20.072 "reconnect_delay_sec": 0, 00:20:20.072 "fast_io_fail_timeout_sec": 0, 00:20:20.072 "disable_auto_failback": false, 00:20:20.072 "generate_uuids": false, 00:20:20.072 "transport_tos": 0, 00:20:20.072 "nvme_error_stat": false, 00:20:20.072 "rdma_srq_size": 0, 00:20:20.072 "io_path_stat": false, 00:20:20.072 "allow_accel_sequence": false, 00:20:20.072 "rdma_max_cq_size": 0, 00:20:20.072 "rdma_cm_event_timeout_ms": 0, 00:20:20.072 "dhchap_digests": [ 00:20:20.072 "sha256", 00:20:20.072 "sha384", 00:20:20.072 
"sha512" 00:20:20.072 ], 00:20:20.072 "dhchap_dhgroups": [ 00:20:20.072 "null", 00:20:20.072 "ffdhe2048", 00:20:20.072 "ffdhe3072", 00:20:20.072 "ffdhe4096", 00:20:20.072 "ffdhe6144", 00:20:20.072 "ffdhe8192" 00:20:20.072 ] 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_nvme_attach_controller", 00:20:20.072 "params": { 00:20:20.072 "name": "nvme0", 00:20:20.072 "trtype": "TCP", 00:20:20.072 "adrfam": "IPv4", 00:20:20.072 "traddr": "10.0.0.2", 00:20:20.072 "trsvcid": "4420", 00:20:20.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.072 "prchk_reftag": false, 00:20:20.072 "prchk_guard": false, 00:20:20.072 "ctrlr_loss_timeout_sec": 0, 00:20:20.072 "reconnect_delay_sec": 0, 00:20:20.072 "fast_io_fail_timeout_sec": 0, 00:20:20.072 "psk": "key0", 00:20:20.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.072 "hdgst": false, 00:20:20.072 "ddgst": false, 00:20:20.072 "multipath": "multipath" 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_nvme_set_hotplug", 00:20:20.072 "params": { 00:20:20.072 "period_us": 100000, 00:20:20.072 "enable": false 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_enable_histogram", 00:20:20.072 "params": { 00:20:20.072 "name": "nvme0n1", 00:20:20.072 "enable": true 00:20:20.072 } 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "method": "bdev_wait_for_examine" 00:20:20.072 } 00:20:20.072 ] 00:20:20.072 }, 00:20:20.072 { 00:20:20.072 "subsystem": "nbd", 00:20:20.072 "config": [] 00:20:20.072 } 00:20:20.072 ] 00:20:20.072 }' 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1750423 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1750423 ']' 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1750423 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1750423 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1750423' 00:20:20.072 killing process with pid 1750423 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1750423 00:20:20.072 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.072 00:20:20.072 Latency(us) 00:20:20.072 [2024-11-06T12:17:01.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.072 [2024-11-06T12:17:01.974Z] =================================================================================================================== 00:20:20.072 [2024-11-06T12:17:01.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1750423 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1750154 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1750154 
']' 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1750154 00:20:20.072 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:20.333 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.333 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1750154 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1750154' 00:20:20.333 killing process with pid 1750154 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1750154 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1750154 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.333 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:20.333 "subsystems": [ 00:20:20.333 { 00:20:20.333 "subsystem": "keyring", 00:20:20.333 "config": [ 00:20:20.333 { 00:20:20.333 "method": "keyring_file_add_key", 00:20:20.333 "params": { 00:20:20.333 "name": "key0", 00:20:20.333 "path": "/tmp/tmp.qz2gFpl54n" 00:20:20.333 } 00:20:20.333 } 00:20:20.333 ] 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "subsystem": "iobuf", 00:20:20.333 "config": [ 00:20:20.333 { 00:20:20.333 "method": "iobuf_set_options", 00:20:20.333 "params": { 00:20:20.333 "small_pool_count": 8192, 00:20:20.333 "large_pool_count": 1024, 00:20:20.333 "small_bufsize": 8192, 00:20:20.333 "large_bufsize": 135168, 00:20:20.333 "enable_numa": false 00:20:20.333 } 00:20:20.333 } 00:20:20.333 ] 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "subsystem": "sock", 00:20:20.333 "config": [ 00:20:20.333 { 00:20:20.333 "method": "sock_set_default_impl", 00:20:20.333 "params": { 00:20:20.333 "impl_name": "posix" 00:20:20.333 } 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "method": "sock_impl_set_options", 00:20:20.333 "params": { 00:20:20.333 "impl_name": "ssl", 00:20:20.333 "recv_buf_size": 4096, 00:20:20.333 "send_buf_size": 4096, 00:20:20.333 "enable_recv_pipe": true, 00:20:20.333 "enable_quickack": false, 00:20:20.333 "enable_placement_id": 0, 00:20:20.333 "enable_zerocopy_send_server": true, 00:20:20.333 "enable_zerocopy_send_client": false, 00:20:20.333 "zerocopy_threshold": 0, 00:20:20.333 "tls_version": 0, 00:20:20.333 "enable_ktls": false 00:20:20.333 } 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "method": "sock_impl_set_options", 00:20:20.333 "params": { 00:20:20.333 "impl_name": "posix", 00:20:20.333 "recv_buf_size": 2097152, 00:20:20.333 "send_buf_size": 2097152, 00:20:20.333 "enable_recv_pipe": true, 00:20:20.333 "enable_quickack": false, 00:20:20.333 "enable_placement_id": 0, 00:20:20.333 "enable_zerocopy_send_server": true, 00:20:20.333 "enable_zerocopy_send_client": 
false, 00:20:20.333 "zerocopy_threshold": 0, 00:20:20.333 "tls_version": 0, 00:20:20.333 "enable_ktls": false 00:20:20.333 } 00:20:20.333 } 00:20:20.333 ] 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "subsystem": "vmd", 00:20:20.333 "config": [] 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "subsystem": "accel", 00:20:20.333 "config": [ 00:20:20.333 { 00:20:20.333 "method": "accel_set_options", 00:20:20.333 "params": { 00:20:20.333 "small_cache_size": 128, 00:20:20.333 "large_cache_size": 16, 00:20:20.333 "task_count": 2048, 00:20:20.333 "sequence_count": 2048, 00:20:20.333 "buf_count": 2048 00:20:20.333 } 00:20:20.333 } 00:20:20.333 ] 00:20:20.333 }, 00:20:20.333 { 00:20:20.333 "subsystem": "bdev", 00:20:20.334 "config": [ 00:20:20.334 { 00:20:20.334 "method": "bdev_set_options", 00:20:20.334 "params": { 00:20:20.334 "bdev_io_pool_size": 65535, 00:20:20.334 "bdev_io_cache_size": 256, 00:20:20.334 "bdev_auto_examine": true, 00:20:20.334 "iobuf_small_cache_size": 128, 00:20:20.334 "iobuf_large_cache_size": 16 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_raid_set_options", 00:20:20.334 "params": { 00:20:20.334 "process_window_size_kb": 1024, 00:20:20.334 "process_max_bandwidth_mb_sec": 0 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_iscsi_set_options", 00:20:20.334 "params": { 00:20:20.334 "timeout_sec": 30 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_nvme_set_options", 00:20:20.334 "params": { 00:20:20.334 "action_on_timeout": "none", 00:20:20.334 "timeout_us": 0, 00:20:20.334 "timeout_admin_us": 0, 00:20:20.334 "keep_alive_timeout_ms": 10000, 00:20:20.334 "arbitration_burst": 0, 00:20:20.334 "low_priority_weight": 0, 00:20:20.334 "medium_priority_weight": 0, 00:20:20.334 "high_priority_weight": 0, 00:20:20.334 "nvme_adminq_poll_period_us": 10000, 00:20:20.334 "nvme_ioq_poll_period_us": 0, 00:20:20.334 "io_queue_requests": 0, 00:20:20.334 "delay_cmd_submit": true, 00:20:20.334 "transport_retry_count": 4, 00:20:20.334 "bdev_retry_count": 3, 00:20:20.334 "transport_ack_timeout": 0, 00:20:20.334 "ctrlr_loss_timeout_sec": 0, 00:20:20.334 "reconnect_delay_sec": 0, 00:20:20.334 "fast_io_fail_timeout_sec": 0, 00:20:20.334 "disable_auto_failback": false, 00:20:20.334 "generate_uuids": false, 00:20:20.334 "transport_tos": 0, 00:20:20.334 "nvme_error_stat": false, 00:20:20.334 "rdma_srq_size": 0, 00:20:20.334 "io_path_stat": false, 00:20:20.334 "allow_accel_sequence": false, 00:20:20.334 "rdma_max_cq_size": 0, 00:20:20.334 "rdma_cm_event_timeout_ms": 0, 00:20:20.334 "dhchap_digests": [ 00:20:20.334 "sha256", 00:20:20.334 "sha384", 00:20:20.334 "sha512" 00:20:20.334 ], 00:20:20.334 "dhchap_dhgroups": [ 00:20:20.334 "null", 00:20:20.334 "ffdhe2048", 00:20:20.334 "ffdhe3072", 00:20:20.334 "ffdhe4096", 00:20:20.334 "ffdhe6144", 00:20:20.334 "ffdhe8192" 00:20:20.334 ] 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_nvme_set_hotplug", 00:20:20.334 "params": { 00:20:20.334 "period_us": 100000, 00:20:20.334 "enable": false 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_malloc_create", 00:20:20.334 "params": { 00:20:20.334 "name": "malloc0", 00:20:20.334 "num_blocks": 8192, 00:20:20.334 "block_size": 4096, 00:20:20.334 "physical_block_size": 4096, 00:20:20.334 "uuid": "a1a31a97-0fff-47b4-a73b-4cbabc8fa69f", 00:20:20.334 "optimal_io_boundary": 0, 00:20:20.334 "md_size": 0, 00:20:20.334 "dif_type": 0, 00:20:20.334 "dif_is_head_of_md": false, 00:20:20.334 "dif_pi_format": 0 
00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "bdev_wait_for_examine" 00:20:20.334 } 00:20:20.334 ] 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "subsystem": "nbd", 00:20:20.334 "config": [] 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "subsystem": "scheduler", 00:20:20.334 "config": [ 00:20:20.334 { 00:20:20.334 "method": "framework_set_scheduler", 00:20:20.334 "params": { 00:20:20.334 "name": "static" 00:20:20.334 } 00:20:20.334 } 00:20:20.334 ] 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "subsystem": "nvmf", 00:20:20.334 "config": [ 00:20:20.334 { 00:20:20.334 "method": "nvmf_set_config", 00:20:20.334 "params": { 00:20:20.334 "discovery_filter": "match_any", 00:20:20.334 "admin_cmd_passthru": { 00:20:20.334 "identify_ctrlr": false 00:20:20.334 }, 00:20:20.334 "dhchap_digests": [ 00:20:20.334 "sha256", 00:20:20.334 "sha384", 00:20:20.334 "sha512" 00:20:20.334 ], 00:20:20.334 "dhchap_dhgroups": [ 00:20:20.334 "null", 00:20:20.334 "ffdhe2048", 00:20:20.334 "ffdhe3072", 00:20:20.334 "ffdhe4096", 00:20:20.334 "ffdhe6144", 00:20:20.334 "ffdhe8192" 00:20:20.334 ] 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_set_max_subsystems", 00:20:20.334 "params": { 00:20:20.334 "max_subsystems": 1024 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_set_crdt", 00:20:20.334 "params": { 00:20:20.334 "crdt1": 0, 00:20:20.334 "crdt2": 0, 00:20:20.334 "crdt3": 0 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_create_transport", 00:20:20.334 "params": { 00:20:20.334 "trtype": "TCP", 00:20:20.334 "max_queue_depth": 128, 00:20:20.334 "max_io_qpairs_per_ctrlr": 127, 00:20:20.334 "in_capsule_data_size": 4096, 00:20:20.334 "max_io_size": 131072, 00:20:20.334 "io_unit_size": 131072, 00:20:20.334 "max_aq_depth": 128, 00:20:20.334 "num_shared_buffers": 511, 00:20:20.334 "buf_cache_size": 4294967295, 00:20:20.334 "dif_insert_or_strip": false, 00:20:20.334 "zcopy": false, 00:20:20.334 "c2h_success": false, 00:20:20.334 "sock_priority": 0, 00:20:20.334 "abort_timeout_sec": 1, 00:20:20.334 "ack_timeout": 0, 00:20:20.334 "data_wr_pool_size": 0 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_create_subsystem", 00:20:20.334 "params": { 00:20:20.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.334 "allow_any_host": false, 00:20:20.334 "serial_number": "00000000000000000000", 00:20:20.334 "model_number": "SPDK bdev Controller", 00:20:20.334 "max_namespaces": 32, 00:20:20.334 "min_cntlid": 1, 00:20:20.334 "max_cntlid": 65519, 00:20:20.334 "ana_reporting": false 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_subsystem_add_host", 00:20:20.334 "params": { 00:20:20.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.334 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.334 "psk": "key0" 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_subsystem_add_ns", 00:20:20.334 "params": { 00:20:20.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.334 "namespace": { 00:20:20.334 "nsid": 1, 00:20:20.334 "bdev_name": "malloc0", 00:20:20.334 "nguid": "A1A31A970FFF47B4A73B4CBABC8FA69F", 00:20:20.334 "uuid": "a1a31a97-0fff-47b4-a73b-4cbabc8fa69f", 00:20:20.334 "no_auto_visible": false 00:20:20.334 } 00:20:20.334 } 00:20:20.334 }, 00:20:20.334 { 00:20:20.334 "method": "nvmf_subsystem_add_listener", 00:20:20.334 "params": { 00:20:20.334 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.334 "listen_address": { 00:20:20.334 "trtype": "TCP", 00:20:20.334 "adrfam": "IPv4", 
00:20:20.334 "traddr": "10.0.0.2", 00:20:20.334 "trsvcid": "4420" 00:20:20.334 }, 00:20:20.334 "secure_channel": false, 00:20:20.334 "sock_impl": "ssl" 00:20:20.334 } 00:20:20.334 } 00:20:20.334 ] 00:20:20.334 } 00:20:20.334 ] 00:20:20.334 }' 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1751106 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1751106 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1751106 ']' 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.334 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.334 [2024-11-06 13:17:02.199162] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:20.334 [2024-11-06 13:17:02.199213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.595 [2024-11-06 13:17:02.289101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.595 [2024-11-06 13:17:02.318142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.595 [2024-11-06 13:17:02.318174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.595 [2024-11-06 13:17:02.318181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.595 [2024-11-06 13:17:02.318185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.595 [2024-11-06 13:17:02.318189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
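Note: the JSON blob echoed above is the verbatim save_config dump captured earlier, replayed into a fresh target through -c /dev/fd/62. A sketch of the same round-trip without the process substitution (config.json is a hypothetical scratch file, and the cvl_0_0_ns_spdk network-namespace wrapper the harness adds is omitted):

    # capture the live target configuration
    ./scripts/rpc.py save_config > config.json
    # restart the target from the captured state (flags copied from this run)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c config.json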
00:20:20.595 [2024-11-06 13:17:02.318684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.854 [2024-11-06 13:17:02.511461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.854 [2024-11-06 13:17:02.543490] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.854 [2024-11-06 13:17:02.543684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.114 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.114 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:21.114 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.114 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.114 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1751140 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1751140 /var/tmp/bdevperf.sock 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1751140 ']' 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
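Note: the bdevperf side is primed the same way — target/tls.sh@274 below feeds the saved bperfcfg JSON into bdevperf via -c /dev/fd/63, so the keyring entry and controller come up from config instead of individual RPCs. An equivalent two-step sketch (bperf.json is a hypothetical file name; flags copied from this run):

    # dump the running bdevperf application state
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json
    # relaunch bdevperf from that config
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json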
00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.375 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:21.375 "subsystems": [ 00:20:21.375 { 00:20:21.375 "subsystem": "keyring", 00:20:21.375 "config": [ 00:20:21.375 { 00:20:21.375 "method": "keyring_file_add_key", 00:20:21.375 "params": { 00:20:21.375 "name": "key0", 00:20:21.375 "path": "/tmp/tmp.qz2gFpl54n" 00:20:21.375 } 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "iobuf", 00:20:21.375 "config": [ 00:20:21.375 { 00:20:21.375 "method": "iobuf_set_options", 00:20:21.375 "params": { 00:20:21.375 "small_pool_count": 8192, 00:20:21.375 "large_pool_count": 1024, 00:20:21.375 "small_bufsize": 8192, 00:20:21.375 "large_bufsize": 135168, 00:20:21.375 "enable_numa": false 00:20:21.375 } 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "sock", 00:20:21.375 "config": [ 00:20:21.375 { 00:20:21.375 "method": "sock_set_default_impl", 00:20:21.375 "params": { 00:20:21.375 "impl_name": "posix" 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "sock_impl_set_options", 00:20:21.375 "params": { 00:20:21.375 "impl_name": "ssl", 00:20:21.375 "recv_buf_size": 4096, 00:20:21.375 "send_buf_size": 4096, 00:20:21.375 "enable_recv_pipe": true, 00:20:21.375 "enable_quickack": false, 00:20:21.375 "enable_placement_id": 0, 00:20:21.375 "enable_zerocopy_send_server": true, 00:20:21.375 "enable_zerocopy_send_client": false, 00:20:21.375 "zerocopy_threshold": 0, 00:20:21.375 "tls_version": 0, 00:20:21.375 "enable_ktls": false 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "sock_impl_set_options", 00:20:21.375 "params": { 00:20:21.375 "impl_name": "posix", 00:20:21.375 "recv_buf_size": 2097152, 00:20:21.375 "send_buf_size": 2097152, 00:20:21.375 "enable_recv_pipe": true, 00:20:21.375 "enable_quickack": false, 00:20:21.375 "enable_placement_id": 0, 00:20:21.375 "enable_zerocopy_send_server": true, 00:20:21.375 "enable_zerocopy_send_client": false, 00:20:21.375 "zerocopy_threshold": 0, 00:20:21.375 "tls_version": 0, 00:20:21.375 "enable_ktls": false 00:20:21.375 } 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "vmd", 00:20:21.375 "config": [] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "accel", 00:20:21.375 "config": [ 00:20:21.375 { 00:20:21.375 "method": "accel_set_options", 00:20:21.375 "params": { 00:20:21.375 "small_cache_size": 128, 00:20:21.375 "large_cache_size": 16, 00:20:21.375 "task_count": 2048, 00:20:21.375 "sequence_count": 2048, 00:20:21.375 "buf_count": 2048 00:20:21.375 } 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "bdev", 00:20:21.375 "config": [ 00:20:21.375 { 00:20:21.375 "method": "bdev_set_options", 00:20:21.375 "params": { 00:20:21.375 "bdev_io_pool_size": 65535, 00:20:21.375 "bdev_io_cache_size": 256, 00:20:21.375 "bdev_auto_examine": true, 00:20:21.375 "iobuf_small_cache_size": 128, 00:20:21.375 "iobuf_large_cache_size": 16 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": 
"bdev_raid_set_options", 00:20:21.375 "params": { 00:20:21.375 "process_window_size_kb": 1024, 00:20:21.375 "process_max_bandwidth_mb_sec": 0 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_iscsi_set_options", 00:20:21.375 "params": { 00:20:21.375 "timeout_sec": 30 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_nvme_set_options", 00:20:21.375 "params": { 00:20:21.375 "action_on_timeout": "none", 00:20:21.375 "timeout_us": 0, 00:20:21.375 "timeout_admin_us": 0, 00:20:21.375 "keep_alive_timeout_ms": 10000, 00:20:21.375 "arbitration_burst": 0, 00:20:21.375 "low_priority_weight": 0, 00:20:21.375 "medium_priority_weight": 0, 00:20:21.375 "high_priority_weight": 0, 00:20:21.375 "nvme_adminq_poll_period_us": 10000, 00:20:21.375 "nvme_ioq_poll_period_us": 0, 00:20:21.375 "io_queue_requests": 512, 00:20:21.375 "delay_cmd_submit": true, 00:20:21.375 "transport_retry_count": 4, 00:20:21.375 "bdev_retry_count": 3, 00:20:21.375 "transport_ack_timeout": 0, 00:20:21.375 "ctrlr_loss_timeout_sec": 0, 00:20:21.375 "reconnect_delay_sec": 0, 00:20:21.375 "fast_io_fail_timeout_sec": 0, 00:20:21.375 "disable_auto_failback": false, 00:20:21.375 "generate_uuids": false, 00:20:21.375 "transport_tos": 0, 00:20:21.375 "nvme_error_stat": false, 00:20:21.375 "rdma_srq_size": 0, 00:20:21.375 "io_path_stat": false, 00:20:21.375 "allow_accel_sequence": false, 00:20:21.375 "rdma_max_cq_size": 0, 00:20:21.375 "rdma_cm_event_timeout_ms": 0, 00:20:21.375 "dhchap_digests": [ 00:20:21.375 "sha256", 00:20:21.375 "sha384", 00:20:21.375 "sha512" 00:20:21.375 ], 00:20:21.375 "dhchap_dhgroups": [ 00:20:21.375 "null", 00:20:21.375 "ffdhe2048", 00:20:21.375 "ffdhe3072", 00:20:21.375 "ffdhe4096", 00:20:21.375 "ffdhe6144", 00:20:21.375 "ffdhe8192" 00:20:21.375 ] 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_nvme_attach_controller", 00:20:21.375 "params": { 00:20:21.375 "name": "nvme0", 00:20:21.375 "trtype": "TCP", 00:20:21.375 "adrfam": "IPv4", 00:20:21.375 "traddr": "10.0.0.2", 00:20:21.375 "trsvcid": "4420", 00:20:21.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.375 "prchk_reftag": false, 00:20:21.375 "prchk_guard": false, 00:20:21.375 "ctrlr_loss_timeout_sec": 0, 00:20:21.375 "reconnect_delay_sec": 0, 00:20:21.375 "fast_io_fail_timeout_sec": 0, 00:20:21.375 "psk": "key0", 00:20:21.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.375 "hdgst": false, 00:20:21.375 "ddgst": false, 00:20:21.375 "multipath": "multipath" 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_nvme_set_hotplug", 00:20:21.375 "params": { 00:20:21.375 "period_us": 100000, 00:20:21.375 "enable": false 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_enable_histogram", 00:20:21.375 "params": { 00:20:21.375 "name": "nvme0n1", 00:20:21.375 "enable": true 00:20:21.375 } 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "method": "bdev_wait_for_examine" 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }, 00:20:21.375 { 00:20:21.375 "subsystem": "nbd", 00:20:21.375 "config": [] 00:20:21.375 } 00:20:21.375 ] 00:20:21.375 }' 00:20:21.375 [2024-11-06 13:17:03.090919] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:20:21.375 [2024-11-06 13:17:03.090971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1751140 ] 00:20:21.375 [2024-11-06 13:17:03.174785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.375 [2024-11-06 13:17:03.204193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.636 [2024-11-06 13:17:03.339151] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.206 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.206 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:22.206 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:22.206 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:22.206 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.206 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.466 Running I/O for 1 seconds... 00:20:23.406 4782.00 IOPS, 18.68 MiB/s 00:20:23.406 Latency(us) 00:20:23.406 [2024-11-06T12:17:05.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.406 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:23.406 Verification LBA range: start 0x0 length 0x2000 00:20:23.406 nvme0n1 : 1.01 4847.15 18.93 0.00 0.00 26233.60 4478.29 30146.56 00:20:23.406 [2024-11-06T12:17:05.308Z] =================================================================================================================== 00:20:23.406 [2024-11-06T12:17:05.308Z] Total : 4847.15 18.93 0.00 0.00 26233.60 4478.29 30146.56 00:20:23.406 { 00:20:23.406 "results": [ 00:20:23.406 { 00:20:23.406 "job": "nvme0n1", 00:20:23.406 "core_mask": "0x2", 00:20:23.406 "workload": "verify", 00:20:23.406 "status": "finished", 00:20:23.406 "verify_range": { 00:20:23.406 "start": 0, 00:20:23.406 "length": 8192 00:20:23.406 }, 00:20:23.406 "queue_depth": 128, 00:20:23.406 "io_size": 4096, 00:20:23.406 "runtime": 1.013172, 00:20:23.406 "iops": 4847.153296774882, 00:20:23.406 "mibps": 18.93419256552688, 00:20:23.406 "io_failed": 0, 00:20:23.406 "io_timeout": 0, 00:20:23.406 "avg_latency_us": 26233.59973936062, 00:20:23.406 "min_latency_us": 4478.293333333333, 00:20:23.406 "max_latency_us": 30146.56 00:20:23.406 } 00:20:23.406 ], 00:20:23.406 "core_count": 1 00:20:23.406 } 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 
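Note: once the log prefixes are stripped, the perform_tests result printed above is plain JSON, so the headline numbers are scriptable. A hypothetical one-liner, assuming the blob was saved to results.json (the .results[0].iops and .avg_latency_us fields are exactly as shown in the dump):

    # pull IOPS and average latency (us) from the bdevperf result
    jq -r '.results[0] | "\(.iops) IOPS, \(.avg_latency_us) us avg"' results.json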
00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:23.406 nvmf_trace.0 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1751140 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1751140 ']' 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1751140 00:20:23.406 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.667 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.667 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1751140 00:20:23.667 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:23.667 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:23.667 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1751140' 00:20:23.668 killing process with pid 1751140 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1751140 00:20:23.668 Received shutdown signal, test time was about 1.000000 seconds 00:20:23.668 00:20:23.668 Latency(us) 00:20:23.668 [2024-11-06T12:17:05.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.668 [2024-11-06T12:17:05.570Z] =================================================================================================================== 00:20:23.668 [2024-11-06T12:17:05.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1751140 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:23.668 rmmod nvme_tcp 00:20:23.668 rmmod nvme_fabrics 00:20:23.668 rmmod nvme_keyring 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.668 13:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1751106 ']' 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1751106 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1751106 ']' 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1751106 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.668 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1751106 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1751106' 00:20:23.929 killing process with pid 1751106 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1751106 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1751106 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.929 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.472 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:26.472 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ilr0TvY1gn /tmp/tmp.Z5W92Wv1qz /tmp/tmp.qz2gFpl54n 00:20:26.472 00:20:26.472 real 1m28.189s 00:20:26.472 user 2m19.277s 00:20:26.472 sys 0m27.162s 00:20:26.472 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:26.472 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.472 ************************************ 00:20:26.472 END TEST nvmf_tls 
00:20:26.472 ************************************ 00:20:26.472 13:17:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.473 ************************************ 00:20:26.473 START TEST nvmf_fips 00:20:26.473 ************************************ 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:26.473 * Looking for test storage... 00:20:26.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:26.473 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.473 --rc genhtml_branch_coverage=1 00:20:26.473 --rc genhtml_function_coverage=1 00:20:26.473 --rc genhtml_legend=1 00:20:26.473 --rc geninfo_all_blocks=1 00:20:26.473 --rc geninfo_unexecuted_blocks=1 00:20:26.473 00:20:26.473 ' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.473 --rc genhtml_branch_coverage=1 00:20:26.473 --rc genhtml_function_coverage=1 00:20:26.473 --rc genhtml_legend=1 00:20:26.473 --rc geninfo_all_blocks=1 00:20:26.473 --rc geninfo_unexecuted_blocks=1 00:20:26.473 00:20:26.473 ' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.473 --rc genhtml_branch_coverage=1 00:20:26.473 --rc genhtml_function_coverage=1 00:20:26.473 --rc genhtml_legend=1 00:20:26.473 --rc geninfo_all_blocks=1 00:20:26.473 --rc geninfo_unexecuted_blocks=1 00:20:26.473 00:20:26.473 ' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.473 --rc genhtml_branch_coverage=1 00:20:26.473 --rc genhtml_function_coverage=1 00:20:26.473 --rc genhtml_legend=1 00:20:26.473 --rc geninfo_all_blocks=1 00:20:26.473 --rc geninfo_unexecuted_blocks=1 00:20:26.473 00:20:26.473 ' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:26.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:26.473 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:26.474 13:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:26.474 Error setting digest 00:20:26.474 40D2355A447F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:26.474 40D2355A447F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:26.474 
13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.474 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.610 13:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:34.610 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:34.610 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:34.610 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.611 13:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:34.611 Found net devices under 0000:31:00.0: cvl_0_0 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:34.611 Found net devices under 0000:31:00.1: cvl_0_1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:34.611 13:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:34.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:20:34.611 00:20:34.611 --- 10.0.0.2 ping statistics --- 00:20:34.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.611 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:34.611 00:20:34.611 --- 10.0.0.1 ping statistics --- 00:20:34.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.611 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1755964 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1755964 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1755964 ']' 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.611 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.611 [2024-11-06 13:17:16.028801] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
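Aside (condensed from the nvmf_tcp_init trace above; not console output): the e810 port cvl_0_0 is moved into its own network namespace to play the target role, while cvl_0_1 stays in the root namespace as the initiator, and the fips-suite target (pid 1755964 above) is launched inside that namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the host firewall for NVMe/TCP on port 4420 (the harness also tags
  # the rule with an SPDK_NVMF comment so teardown can find it again)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # start the target under test inside the namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

At teardown, nvmftestfini drops the tagged rule by replaying iptables-save | grep -v SPDK_NVMF | iptables-restore, which is the grep visible in the fini trace further down.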
00:20:34.611 [2024-11-06 13:17:16.028878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.611 [2024-11-06 13:17:16.129436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.611 [2024-11-06 13:17:16.179954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.611 [2024-11-06 13:17:16.180006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.611 [2024-11-06 13:17:16.180014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.611 [2024-11-06 13:17:16.180022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.611 [2024-11-06 13:17:16.180028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.611 [2024-11-06 13:17:16.180843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.vMX 00:20:35.182 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:35.183 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.vMX 00:20:35.183 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.vMX 00:20:35.183 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.vMX 00:20:35.183 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.183 [2024-11-06 13:17:17.040409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.183 [2024-11-06 13:17:17.056404] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.183 [2024-11-06 13:17:17.056682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.443 malloc0 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.443 13:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1756233 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1756233 /var/tmp/bdevperf.sock 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1756233 ']' 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.443 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:35.443 [2024-11-06 13:17:17.208515] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:35.443 [2024-11-06 13:17:17.208596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756233 ] 00:20:35.443 [2024-11-06 13:17:17.304364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.704 [2024-11-06 13:17:17.355492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.276 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.276 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:36.276 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.vMX 00:20:36.536 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.536 [2024-11-06 13:17:18.389082] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.797 TLSTESTn1 00:20:36.797 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.797 Running I/O for 10 seconds... 
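Aside (reassembled from the fips.sh xtrace above; not console output): the TLSTESTn1 run just starting was authenticated with an NVMe/TCP interchange-format PSK. End to end, the key's lifecycle in this suite is:

  # write the interchange-format PSK to a private temp file
  key_path=$(mktemp -t spdk-psk.XXX)        # /tmp/spdk-psk.vMX in this run
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # register the file with bdevperf's keyring, then attach over TLS using it
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      keyring_file_add_key key0 "$key_path"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  rm -f "$key_path"                         # done in the cleanup trap at the end of the suite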
00:20:39.118 3402.00 IOPS, 13.29 MiB/s [2024-11-06T12:17:21.957Z] 4072.50 IOPS, 15.91 MiB/s [2024-11-06T12:17:22.898Z] 4590.33 IOPS, 17.93 MiB/s [2024-11-06T12:17:23.838Z] 5057.50 IOPS, 19.76 MiB/s [2024-11-06T12:17:24.778Z] 5300.40 IOPS, 20.70 MiB/s [2024-11-06T12:17:25.718Z] 5384.67 IOPS, 21.03 MiB/s [2024-11-06T12:17:26.659Z] 5418.00 IOPS, 21.16 MiB/s [2024-11-06T12:17:28.042Z] 5431.50 IOPS, 21.22 MiB/s [2024-11-06T12:17:28.979Z] 5398.44 IOPS, 21.09 MiB/s [2024-11-06T12:17:28.979Z] 5429.60 IOPS, 21.21 MiB/s 00:20:47.078 Latency(us) 00:20:47.078 [2024-11-06T12:17:28.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.078 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.078 Verification LBA range: start 0x0 length 0x2000 00:20:47.078 TLSTESTn1 : 10.02 5434.11 21.23 0.00 0.00 23520.79 6007.47 28835.84 00:20:47.078 [2024-11-06T12:17:28.980Z] =================================================================================================================== 00:20:47.078 [2024-11-06T12:17:28.980Z] Total : 5434.11 21.23 0.00 0.00 23520.79 6007.47 28835.84 00:20:47.078 { 00:20:47.078 "results": [ 00:20:47.078 { 00:20:47.078 "job": "TLSTESTn1", 00:20:47.078 "core_mask": "0x4", 00:20:47.078 "workload": "verify", 00:20:47.078 "status": "finished", 00:20:47.078 "verify_range": { 00:20:47.078 "start": 0, 00:20:47.078 "length": 8192 00:20:47.078 }, 00:20:47.078 "queue_depth": 128, 00:20:47.078 "io_size": 4096, 00:20:47.078 "runtime": 10.01508, 00:20:47.078 "iops": 5434.105369103392, 00:20:47.078 "mibps": 21.226974098060126, 00:20:47.078 "io_failed": 0, 00:20:47.078 "io_timeout": 0, 00:20:47.078 "avg_latency_us": 23520.789227103738, 00:20:47.078 "min_latency_us": 6007.466666666666, 00:20:47.078 "max_latency_us": 28835.84 00:20:47.078 } 00:20:47.078 ], 00:20:47.078 "core_count": 1 00:20:47.078 } 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:47.078 nvmf_trace.0 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1756233 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1756233 ']' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 1756233 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1756233 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1756233' 00:20:47.078 killing process with pid 1756233 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1756233 00:20:47.078 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.078 00:20:47.078 Latency(us) 00:20:47.078 [2024-11-06T12:17:28.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.078 [2024-11-06T12:17:28.980Z] =================================================================================================================== 00:20:47.078 [2024-11-06T12:17:28.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1756233 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.078 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.078 rmmod nvme_tcp 00:20:47.078 rmmod nvme_fabrics 00:20:47.078 rmmod nvme_keyring 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1755964 ']' 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1755964 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1755964 ']' 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1755964 00:20:47.338 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1755964 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:47.338 13:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1755964' 00:20:47.338 killing process with pid 1755964 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1755964 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1755964 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.338 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.vMX 00:20:49.890 00:20:49.890 real 0m23.374s 00:20:49.890 user 0m25.199s 00:20:49.890 sys 0m9.578s 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.890 ************************************ 00:20:49.890 END TEST nvmf_fips 00:20:49.890 ************************************ 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.890 ************************************ 00:20:49.890 START TEST nvmf_control_msg_list 00:20:49.890 ************************************ 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:49.890 * Looking for test storage... 
00:20:49.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:49.890 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:49.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.891 --rc genhtml_branch_coverage=1 00:20:49.891 --rc genhtml_function_coverage=1 00:20:49.891 --rc genhtml_legend=1 00:20:49.891 --rc geninfo_all_blocks=1 00:20:49.891 --rc geninfo_unexecuted_blocks=1 00:20:49.891 00:20:49.891 ' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:49.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.891 --rc genhtml_branch_coverage=1 00:20:49.891 --rc genhtml_function_coverage=1 00:20:49.891 --rc genhtml_legend=1 00:20:49.891 --rc geninfo_all_blocks=1 00:20:49.891 --rc geninfo_unexecuted_blocks=1 00:20:49.891 00:20:49.891 ' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:49.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.891 --rc genhtml_branch_coverage=1 00:20:49.891 --rc genhtml_function_coverage=1 00:20:49.891 --rc genhtml_legend=1 00:20:49.891 --rc geninfo_all_blocks=1 00:20:49.891 --rc geninfo_unexecuted_blocks=1 00:20:49.891 00:20:49.891 ' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:49.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.891 --rc genhtml_branch_coverage=1 00:20:49.891 --rc genhtml_function_coverage=1 00:20:49.891 --rc genhtml_legend=1 00:20:49.891 --rc geninfo_all_blocks=1 00:20:49.891 --rc geninfo_unexecuted_blocks=1 00:20:49.891 00:20:49.891 ' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:49.891 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.892 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:58.148 13:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:58.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.148 13:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.148 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:58.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:58.149 Found net devices under 0000:31:00.0: cvl_0_0 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:58.149 Found net devices under 0000:31:00.1: cvl_0_1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:58.149 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.149 13:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:58.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:20:58.149 00:20:58.149 --- 10.0.0.2 ping statistics --- 00:20:58.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.149 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:20:58.149 00:20:58.149 --- 10.0.0.1 ping statistics --- 00:20:58.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.149 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1762778 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1762778 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1762778 ']' 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:58.149 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.149 [2024-11-06 13:17:39.227594] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:20:58.149 [2024-11-06 13:17:39.227658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.149 [2024-11-06 13:17:39.326922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.149 [2024-11-06 13:17:39.378200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.149 [2024-11-06 13:17:39.378251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.149 [2024-11-06 13:17:39.378259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.149 [2024-11-06 13:17:39.378267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.149 [2024-11-06 13:17:39.378273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
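The rpc_cmd calls that follow set up the entire fixture for this test: a TCP transport constrained to one control message and 768-byte in-capsule data, a subsystem backed by a malloc ramdisk, a listener on 10.0.0.2:4420, and three queue-depth-1 perf readers pinned to different cores so they contend for that single control message. A minimal sketch of the same sequence driven through rpc.py, assuming the repo root path from this run and eliding the waitforlisten step:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    # ... wait until the target is listening on /var/tmp/spdk.sock ...
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a      # -a: allow any host
    rpc bdev_malloc_create -b Malloc0 32 512                     # 32 MB, 512 B blocks
    rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    for core in 0x2 0x4 0x8; do                                  # one reader per core
        "$SPDK/build/bin/spdk_nvme_perf" -c "$core" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait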
00:20:58.149 [2024-11-06 13:17:39.379077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 [2024-11-06 13:17:40.109399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 Malloc0 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.411 13:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:58.411 [2024-11-06 13:17:40.164536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1762972 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1762973 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1762974 00:20:58.411 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1762972 00:20:58.412 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.412 [2024-11-06 13:17:40.255113] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:58.412 [2024-11-06 13:17:40.265350] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:58.412 [2024-11-06 13:17:40.265669] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:59.798 Initializing NVMe Controllers 00:20:59.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:59.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:59.798 Initialization complete. Launching workers. 
00:20:59.798 ======================================================== 00:20:59.798 Latency(us) 00:20:59.799 Device Information : IOPS MiB/s Average min max 00:20:59.799 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40924.99 40793.81 41383.73 00:20:59.799 ======================================================== 00:20:59.799 Total : 25.00 0.10 40924.99 40793.81 41383.73 00:20:59.799 00:20:59.799 Initializing NVMe Controllers 00:20:59.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:59.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:59.799 Initialization complete. Launching workers. 00:20:59.799 ======================================================== 00:20:59.799 Latency(us) 00:20:59.799 Device Information : IOPS MiB/s Average min max 00:20:59.799 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1755.99 6.86 569.33 160.50 792.32 00:20:59.799 ======================================================== 00:20:59.799 Total : 1755.99 6.86 569.33 160.50 792.32 00:20:59.799 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1762973 00:20:59.799 Initializing NVMe Controllers 00:20:59.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:59.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:59.799 Initialization complete. Launching workers. 00:20:59.799 ======================================================== 00:20:59.799 Latency(us) 00:20:59.799 Device Information : IOPS MiB/s Average min max 00:20:59.799 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40917.82 40791.18 41291.94 00:20:59.799 ======================================================== 00:20:59.799 Total : 25.00 0.10 40917.82 40791.18 41291.94 00:20:59.799 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1762974 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.799 rmmod nvme_tcp 00:20:59.799 rmmod nvme_fabrics 00:20:59.799 rmmod nvme_keyring 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 1762778 ']' 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1762778 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1762778 ']' 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1762778 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1762778 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1762778' 00:20:59.799 killing process with pid 1762778 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1762778 00:20:59.799 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1762778 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.062 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.605 00:21:02.605 real 0m12.539s 00:21:02.605 user 0m8.248s 00:21:02.605 sys 0m6.616s 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:02.605 ************************************ 00:21:02.605 END TEST nvmf_control_msg_list 00:21:02.605 
************************************ 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.605 ************************************ 00:21:02.605 START TEST nvmf_wait_for_buf 00:21:02.605 ************************************ 00:21:02.605 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:02.605 * Looking for test storage... 00:21:02.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:02.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.605 --rc genhtml_branch_coverage=1 00:21:02.605 --rc genhtml_function_coverage=1 00:21:02.605 --rc genhtml_legend=1 00:21:02.605 --rc geninfo_all_blocks=1 00:21:02.605 --rc geninfo_unexecuted_blocks=1 00:21:02.605 00:21:02.605 ' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:02.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.605 --rc genhtml_branch_coverage=1 00:21:02.605 --rc genhtml_function_coverage=1 00:21:02.605 --rc genhtml_legend=1 00:21:02.605 --rc geninfo_all_blocks=1 00:21:02.605 --rc geninfo_unexecuted_blocks=1 00:21:02.605 00:21:02.605 ' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:02.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.605 --rc genhtml_branch_coverage=1 00:21:02.605 --rc genhtml_function_coverage=1 00:21:02.605 --rc genhtml_legend=1 00:21:02.605 --rc geninfo_all_blocks=1 00:21:02.605 --rc geninfo_unexecuted_blocks=1 00:21:02.605 00:21:02.605 ' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:02.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.605 --rc genhtml_branch_coverage=1 00:21:02.605 --rc genhtml_function_coverage=1 00:21:02.605 --rc genhtml_legend=1 00:21:02.605 --rc geninfo_all_blocks=1 00:21:02.605 --rc geninfo_unexecuted_blocks=1 00:21:02.605 00:21:02.605 ' 00:21:02.605 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.606 13:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.606 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.744 
13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:10.744 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:10.744 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:10.744 Found net devices under 0000:31:00.0: cvl_0_0 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:10.744 Found net devices under 0000:31:00.1: cvl_0_1 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.744 13:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.744 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:21:10.745 00:21:10.745 --- 10.0.0.2 ping statistics --- 00:21:10.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.745 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:21:10.745 00:21:10.745 --- 10.0.0.1 ping statistics --- 00:21:10.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.745 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1767489 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1767489 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1767489 ']' 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.745 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:10.745 [2024-11-06 13:17:51.913375] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
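The sequence above is the point-to-point test network that nvmf_tcp_init builds: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in the firewall, and a ping in each direction confirms the link. A minimal standalone sketch of the same wiring, assuming the interface names and addresses from this log and omitting the script's error handling:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init wiring seen above; names and addresses follow the log.
    set -e
    TARGET_IF=cvl_0_0        # served by the target inside the namespace
    INITIATOR_IF=cvl_0_1     # used by the initiator in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP listener port, then verify reachability both ways.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With this in place the target application is launched under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the log.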
00:21:10.745 [2024-11-06 13:17:51.913442] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.745 [2024-11-06 13:17:52.015919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.745 [2024-11-06 13:17:52.067006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.745 [2024-11-06 13:17:52.067058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.745 [2024-11-06 13:17:52.067067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.745 [2024-11-06 13:17:52.067074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.745 [2024-11-06 13:17:52.067081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.745 [2024-11-06 13:17:52.067823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.006 13:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 Malloc0 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.006 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.006 [2024-11-06 13:17:52.904444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:11.267 [2024-11-06 13:17:52.932775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.267 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:11.267 [2024-11-06 13:17:53.038346] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:12.650 Initializing NVMe Controllers 00:21:12.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:12.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:12.650 Initialization complete. Launching workers. 00:21:12.650 ======================================================== 00:21:12.650 Latency(us) 00:21:12.650 Device Information : IOPS MiB/s Average min max 00:21:12.650 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.97 8011.74 63855.83 00:21:12.650 ======================================================== 00:21:12.650 Total : 129.00 16.12 32294.97 8011.74 63855.83 00:21:12.650 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.910 rmmod nvme_tcp 00:21:12.910 rmmod nvme_fabrics 00:21:12.910 rmmod nvme_keyring 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1767489 ']' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1767489 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1767489 ']' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1767489 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1767489 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1767489' 00:21:12.910 killing process with pid 1767489 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1767489 00:21:12.910 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1767489 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.171 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.711 00:21:15.711 real 0m13.050s 00:21:15.711 user 0m5.351s 00:21:15.711 sys 0m6.274s 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:15.711 ************************************ 00:21:15.711 END TEST nvmf_wait_for_buf 00:21:15.711 ************************************ 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:15.711 13:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.711 13:17:57 
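Condensing the wait_for_buf run that just completed: the target is started with --wait-for-rpc, the iobuf small pool is capped at 154 buffers, a malloc bdev is exported over a TCP transport created with only 24 shared buffers, and one second of spdk_nvme_perf randread traffic is enough to make the pool run dry. The pass condition is that iobuf_get_stats reports a non-zero small_pool.retry (2038 here), meaning the target waited for buffers instead of failing I/O. A sketch of the same sequence, assuming an SPDK checkout as the working directory and the default rpc.py socket (the log drives these through rpc_cmd inside the namespace):

    # Sketch of the wait_for_buf flow; every RPC name below appears in the log.
    rpc=./scripts/rpc.py

    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    # The test fails only if the small pool never ran dry.
    retries=$($rpc iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -eq 0 ]] && echo "FAIL: no buffer waits observed"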
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.842 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:23.843 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:23.843 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:23.843 Found net devices under 0000:31:00.0: cvl_0_0 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:23.843 Found net devices under 0000:31:00.1: cvl_0_1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.843 ************************************ 00:21:23.843 START TEST nvmf_perf_adq 00:21:23.843 ************************************ 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:23.843 * Looking for test storage... 00:21:23.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.843 13:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:23.843 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:23.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.844 --rc genhtml_branch_coverage=1 00:21:23.844 --rc genhtml_function_coverage=1 00:21:23.844 --rc genhtml_legend=1 00:21:23.844 --rc geninfo_all_blocks=1 00:21:23.844 --rc geninfo_unexecuted_blocks=1 00:21:23.844 00:21:23.844 ' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:23.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.844 --rc genhtml_branch_coverage=1 00:21:23.844 --rc genhtml_function_coverage=1 00:21:23.844 --rc genhtml_legend=1 00:21:23.844 --rc geninfo_all_blocks=1 00:21:23.844 --rc geninfo_unexecuted_blocks=1 00:21:23.844 00:21:23.844 ' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:23.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.844 --rc genhtml_branch_coverage=1 00:21:23.844 --rc genhtml_function_coverage=1 00:21:23.844 --rc genhtml_legend=1 00:21:23.844 --rc geninfo_all_blocks=1 00:21:23.844 --rc geninfo_unexecuted_blocks=1 00:21:23.844 00:21:23.844 ' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:23.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.844 --rc genhtml_branch_coverage=1 00:21:23.844 --rc genhtml_function_coverage=1 00:21:23.844 --rc genhtml_legend=1 00:21:23.844 --rc geninfo_all_blocks=1 00:21:23.844 --rc geninfo_unexecuted_blocks=1 00:21:23.844 00:21:23.844 ' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
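The lcov check traced above runs scripts/common.sh's cmp_versions, which splits each version string on ".", "-" and ":" and compares component by component; lt 1.15 2 succeeds because 1 < 2 at the first component, so the legacy --rc lcov_* option names are exported. A condensed sketch of that comparison, simplified from the real helper (which pads missing components through its decimal function):

    # Condensed dotted-version comparison; returns success when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"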
00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:23.844 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.844 13:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.424 13:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:30.424 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:30.424 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.424 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:30.425 Found net devices under 0000:31:00.0: cvl_0_0 00:21:30.425 13:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:30.425 Found net devices under 0000:31:00.1: cvl_0_1 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:30.425 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:31.807 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:34.348 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:39.633 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.633 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:39.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:39.634 Found net devices under 0000:31:00.0: cvl_0_0 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:39.634 Found net devices under 0000:31:00.1: cvl_0_1 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.634 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:21:39.634 00:21:39.634 --- 10.0.0.2 ping statistics --- 00:21:39.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.634 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:21:39.634 00:21:39.634 --- 10.0.0.1 ping statistics --- 00:21:39.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.634 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1778534 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1778534 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1778534 ']' 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.634 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:39.635 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.635 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:39.635 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.635 [2024-11-06 13:18:21.373135] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
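To recap the plumbing that just completed: the test builds a back-to-back topology on a single host by moving the target port (cvl_0_0) into its own network namespace while the initiator port (cvl_0_1) stays in the root namespace, then opens the NVMe/TCP port and verifies reachability in both directions; nvmf_tgt itself is launched inside the namespace (note the ip netns exec prefix on the command above). Condensed from the trace, with the iptables comment tag trimmed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1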
00:21:39.635 [2024-11-06 13:18:21.373197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.635 [2024-11-06 13:18:21.474239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.635 [2024-11-06 13:18:21.528602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.635 [2024-11-06 13:18:21.528655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.635 [2024-11-06 13:18:21.528664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.635 [2024-11-06 13:18:21.528671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.635 [2024-11-06 13:18:21.528678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.635 [2024-11-06 13:18:21.530732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.635 [2024-11-06 13:18:21.530894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.635 [2024-11-06 13:18:21.530947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.635 [2024-11-06 13:18:21.530946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.576 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:40.576 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:40.576 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.576 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 
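With the four reactors up, the target is configured over its RPC socket. This first pass is the non-ADQ baseline, so socket placement ID stays 0 and the TCP transport gets --sock-priority 0. The rpc_cmd calls in the trace are thin wrappers around scripts/rpc.py; an equivalent sequence for the whole adq_configure_nvmf_target step, which the trace walks through below, would look like this (default RPC socket path assumed):

  ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420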
13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 [2024-11-06 13:18:22.399707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 Malloc1 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.577 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.837 [2024-11-06 13:18:22.479186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.837 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.837 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1778705 00:21:40.838 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:40.838 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:42.750 "tick_rate": 2400000000, 00:21:42.750 "poll_groups": [ 00:21:42.750 { 00:21:42.750 "name": "nvmf_tgt_poll_group_000", 00:21:42.750 "admin_qpairs": 1, 00:21:42.750 "io_qpairs": 1, 00:21:42.750 "current_admin_qpairs": 1, 00:21:42.750 "current_io_qpairs": 1, 00:21:42.750 "pending_bdev_io": 0, 00:21:42.750 "completed_nvme_io": 16359, 00:21:42.750 "transports": [ 00:21:42.750 { 00:21:42.750 "trtype": "TCP" 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 }, 00:21:42.750 { 00:21:42.750 "name": "nvmf_tgt_poll_group_001", 00:21:42.750 "admin_qpairs": 0, 00:21:42.750 "io_qpairs": 1, 00:21:42.750 "current_admin_qpairs": 0, 00:21:42.750 "current_io_qpairs": 1, 00:21:42.750 "pending_bdev_io": 0, 00:21:42.750 "completed_nvme_io": 16148, 00:21:42.750 "transports": [ 00:21:42.750 { 00:21:42.750 "trtype": "TCP" 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 }, 00:21:42.750 { 00:21:42.750 "name": "nvmf_tgt_poll_group_002", 00:21:42.750 "admin_qpairs": 0, 00:21:42.750 "io_qpairs": 1, 00:21:42.750 "current_admin_qpairs": 0, 00:21:42.750 "current_io_qpairs": 1, 00:21:42.750 "pending_bdev_io": 0, 00:21:42.750 "completed_nvme_io": 17104, 00:21:42.750 "transports": [ 00:21:42.750 { 00:21:42.750 "trtype": "TCP" 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 }, 00:21:42.750 { 00:21:42.750 "name": "nvmf_tgt_poll_group_003", 00:21:42.750 "admin_qpairs": 0, 00:21:42.750 "io_qpairs": 1, 00:21:42.750 "current_admin_qpairs": 0, 00:21:42.750 "current_io_qpairs": 1, 00:21:42.750 "pending_bdev_io": 0, 00:21:42.750 "completed_nvme_io": 15760, 00:21:42.750 "transports": [ 00:21:42.750 { 00:21:42.750 "trtype": "TCP" 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 } 00:21:42.750 ] 00:21:42.750 }' 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:42.750 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1778705 00:21:50.887 Initializing NVMe Controllers 00:21:50.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:50.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:50.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:50.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:50.887 Initialization complete. Launching workers. 00:21:50.887 ======================================================== 00:21:50.887 Latency(us) 00:21:50.887 Device Information : IOPS MiB/s Average min max 00:21:50.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12260.10 47.89 5220.89 1261.63 11178.31 00:21:50.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12922.80 50.48 4952.71 1342.87 12792.16 00:21:50.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13486.10 52.68 4745.23 1343.30 11502.49 00:21:50.887 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13220.20 51.64 4840.39 1239.99 12773.72 00:21:50.887 ======================================================== 00:21:50.887 Total : 51889.19 202.69 4933.53 1239.99 12792.16 00:21:50.887 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.887 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.887 rmmod nvme_tcp 00:21:50.887 rmmod nvme_fabrics 00:21:51.148 rmmod nvme_keyring 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1778534 ']' 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1778534 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1778534 ']' 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1778534 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1778534 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1778534' 00:21:51.148 killing process with pid 1778534 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1778534 00:21:51.148 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1778534 00:21:51.148 13:18:33 
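Before the numbers above were trusted, the baseline run verified qpair spread: with placement ID 0 the four initiator qpairs land one per poll group, and the trace's jq filter over nvmf_get_stats counted exactly four groups with current_io_qpairs == 1 (completed I/O split roughly evenly, 15.7k to 17.1k per group). The same check as a standalone snippet, assuming rpc.py is on PATH:

  count=$(rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  [[ $count -ne 4 ]] && echo 'baseline: unexpected qpair spread' >&2

The aggregate shown, roughly 51.9k IOPS at queue depth 64 and 4 KiB random reads across four initiator cores, is the reference point for the ADQ pass that follows.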
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.148 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.693 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.693 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:53.693 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:53.693 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:55.078 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:56.992 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:02.284 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.284 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:02.284 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:02.285 Found net devices under 0000:31:00.0: cvl_0_0 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:02.285 Found net devices under 0000:31:00.1: cvl_0_1 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.285 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.285 13:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.285 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:22:02.547 00:22:02.547 --- 10.0.0.2 ping statistics --- 00:22:02.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.547 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:02.547 00:22:02.547 --- 10.0.0.1 ping statistics --- 00:22:02.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.547 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:02.547 net.core.busy_poll = 1 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:02.547 net.core.busy_read = 1 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:02.547 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1783352 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1783352 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1783352 ']' 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.808 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:02.808 [2024-11-06 13:18:44.586759] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:22:02.808 [2024-11-06 13:18:44.586826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.808 [2024-11-06 13:18:44.686066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.069 [2024-11-06 13:18:44.738738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
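This is the heart of the ADQ configuration: mqprio carves the port into two hardware traffic classes, TC0 keeping two default queues (2@0) and TC1 reserving two dedicated queues at offset 2 (2@2), while the flower filter steers NVMe/TCP traffic (destination 10.0.0.2, TCP port 4420) into TC1 entirely in hardware (skip_sw), and the busy-poll sysctls let application threads poll those queues directly instead of waiting on interrupts. Condensed from the trace (in the test every command runs inside cvl_0_0_ns_spdk via ip netns exec):

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1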
00:22:03.069 [2024-11-06 13:18:44.738796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.069 [2024-11-06 13:18:44.738805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.069 [2024-11-06 13:18:44.738812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.069 [2024-11-06 13:18:44.738819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.069 [2024-11-06 13:18:44.740842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.069 [2024-11-06 13:18:44.741004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.069 [2024-11-06 13:18:44.741164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.069 [2024-11-06 13:18:44.741164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.641 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 
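The target-side configuration that follows is identical to the baseline except for two knobs: the posix socket implementation now reports placement IDs (--enable-placement-id 1), letting the transport group incoming connections by the hardware queue they arrive on, and the TCP transport is created with --sock-priority 1, which, as I read the mqprio map 0 1 above, pins its sockets to TC1. Only those two calls change relative to the baseline sequence:

  rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1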
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 [2024-11-06 13:18:45.605586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 Malloc1 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.903 [2024-11-06 13:18:45.686288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1783625 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:03.903 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.881 13:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:22:05.881 "tick_rate": 2400000000,
00:22:05.881 "poll_groups": [
00:22:05.881 {
00:22:05.881 "name": "nvmf_tgt_poll_group_000",
00:22:05.881 "admin_qpairs": 1,
00:22:05.881 "io_qpairs": 2,
00:22:05.881 "current_admin_qpairs": 1,
00:22:05.881 "current_io_qpairs": 2,
00:22:05.881 "pending_bdev_io": 0,
00:22:05.881 "completed_nvme_io": 25048,
00:22:05.881 "transports": [
00:22:05.881 {
00:22:05.881 "trtype": "TCP"
00:22:05.881 }
00:22:05.881 ]
00:22:05.881 },
00:22:05.881 {
00:22:05.881 "name": "nvmf_tgt_poll_group_001",
00:22:05.881 "admin_qpairs": 0,
00:22:05.881 "io_qpairs": 2,
00:22:05.881 "current_admin_qpairs": 0,
00:22:05.881 "current_io_qpairs": 2,
00:22:05.881 "pending_bdev_io": 0,
00:22:05.881 "completed_nvme_io": 29722,
00:22:05.881 "transports": [
00:22:05.881 {
00:22:05.881 "trtype": "TCP"
00:22:05.881 }
00:22:05.881 ]
00:22:05.881 },
00:22:05.881 {
00:22:05.881 "name": "nvmf_tgt_poll_group_002",
00:22:05.881 "admin_qpairs": 0,
00:22:05.881 "io_qpairs": 0,
00:22:05.881 "current_admin_qpairs": 0,
00:22:05.881 "current_io_qpairs": 0,
00:22:05.881 "pending_bdev_io": 0,
00:22:05.881 "completed_nvme_io": 0,
00:22:05.881 "transports": [
00:22:05.881 {
00:22:05.881 "trtype": "TCP"
00:22:05.881 }
00:22:05.881 ]
00:22:05.881 },
00:22:05.881 {
00:22:05.881 "name": "nvmf_tgt_poll_group_003",
00:22:05.881 "admin_qpairs": 0,
00:22:05.881 "io_qpairs": 0,
00:22:05.881 "current_admin_qpairs": 0,
00:22:05.881 "current_io_qpairs": 0,
00:22:05.881 "pending_bdev_io": 0,
00:22:05.881 "completed_nvme_io": 0,
00:22:05.881 "transports": [
00:22:05.881 {
00:22:05.881 "trtype": "TCP"
00:22:05.881 }
00:22:05.881 ]
00:22:05.881 }
00:22:05.881 ]
00:22:05.881 }'
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:22:05.881 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1783625
00:22:14.051 Initializing NVMe Controllers
00:22:14.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:14.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:14.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:14.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:14.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:14.051 Initialization complete. Launching workers.
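
The nvmf_get_stats snapshot above is the pass/fail heart of the ADQ check: all completed I/O landed on poll groups 000 and 001 (25048 and 29722 completions), poll groups 002 and 003 stayed completely idle, so the jq-plus-wc count of idle groups is 2 and the [[ 2 -lt 2 ]] guard does not trip. A hedged reconstruction of that verification as a standalone script follows; the rpc.py path is an assumption, since the test itself goes through its rpc_cmd wrapper:

    #!/usr/bin/env bash
    # Sketch: assert that ADQ kept I/O off some poll groups, mirroring the
    # perf_adq.sh@107-109 check above. Adjust RPC to your SPDK checkout.
    RPC=/path/to/spdk/scripts/rpc.py
    idle=$("$RPC" nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
        | wc -l)
    # With 4 poll groups and traffic steered onto 2 queues, expect >= 2 idle groups.
    if [[ $idle -lt 2 ]]; then
        echo "ADQ steering failed: only $idle idle poll group(s)" >&2
        exit 1
    fi
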
00:22:14.051 ========================================================
00:22:14.051 Latency(us)
00:22:14.051 Device Information : IOPS MiB/s Average min max
00:22:14.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9908.20 38.70 6460.10 962.84 51039.49
00:22:14.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8637.30 33.74 7409.78 1306.84 54546.19
00:22:14.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12022.80 46.96 5323.51 1061.98 55179.90
00:22:14.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7653.00 29.89 8364.62 1294.23 54669.71
00:22:14.051 ========================================================
00:22:14.051 Total : 38221.30 149.30 6698.52 962.84 55179.90
00:22:14.051
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:14.051 rmmod nvme_tcp
00:22:14.051 rmmod nvme_fabrics
00:22:14.051 rmmod nvme_keyring
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1783352 ']'
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1783352
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1783352 ']'
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1783352
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:14.051 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1783352
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1783352'
killing process with pid 1783352
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1783352
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1783352
00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:14.312
13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.312 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.313 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.313 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.313 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:17.613 00:22:17.613 real 0m54.832s 00:22:17.613 user 2m50.038s 00:22:17.613 sys 0m11.848s 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.613 ************************************ 00:22:17.613 END TEST nvmf_perf_adq 00:22:17.613 ************************************ 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.613 ************************************ 00:22:17.613 START TEST nvmf_shutdown 00:22:17.613 ************************************ 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.613 * Looking for test storage... 
00:22:17.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.613 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.614 --rc genhtml_branch_coverage=1 00:22:17.614 --rc genhtml_function_coverage=1 00:22:17.614 --rc genhtml_legend=1 00:22:17.614 --rc geninfo_all_blocks=1 00:22:17.614 --rc geninfo_unexecuted_blocks=1 00:22:17.614 00:22:17.614 ' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.614 --rc genhtml_branch_coverage=1 00:22:17.614 --rc genhtml_function_coverage=1 00:22:17.614 --rc genhtml_legend=1 00:22:17.614 --rc geninfo_all_blocks=1 00:22:17.614 --rc geninfo_unexecuted_blocks=1 00:22:17.614 00:22:17.614 ' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.614 --rc genhtml_branch_coverage=1 00:22:17.614 --rc genhtml_function_coverage=1 00:22:17.614 --rc genhtml_legend=1 00:22:17.614 --rc geninfo_all_blocks=1 00:22:17.614 --rc geninfo_unexecuted_blocks=1 00:22:17.614 00:22:17.614 ' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.614 --rc genhtml_branch_coverage=1 00:22:17.614 --rc genhtml_function_coverage=1 00:22:17.614 --rc genhtml_legend=1 00:22:17.614 --rc geninfo_all_blocks=1 00:22:17.614 --rc geninfo_unexecuted_blocks=1 00:22:17.614 00:22:17.614 ' 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
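
The scripts/common.sh trace above is a generic dotted-version comparator deciding which lcov option syntax to use: both version strings are split on the characters . - :, the shorter one is zero-padded, and fields are compared numerically left to right. A compact standalone equivalent; the function name is illustrative and purely numeric fields are assumed:

    #!/usr/bin/env bash
    # Sketch: dotted-version "less than" in the spirit of scripts/common.sh lt().
    version_lt() {   # version_lt 1.15 2  -> exit 0 (true)
        local IFS=.-: a b i x y n
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}   # zero-pad the shorter version
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "lcov predates 2: use the legacy --rc lcov_* option names"
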
00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.614 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:17.875 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.876 13:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.876 ************************************ 00:22:17.876 START TEST nvmf_shutdown_tc1 00:22:17.876 ************************************ 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.876 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.021 13:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.021 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.022 13:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:26.022 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:26.022 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:26.022 Found net devices under 0000:31:00.0: cvl_0_0 00:22:26.022 13:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:26.022 Found net devices under 0000:31:00.1: cvl_0_1 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.022 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:22:26.022 00:22:26.022 --- 10.0.0.2 ping statistics --- 00:22:26.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.022 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:22:26.022 00:22:26.022 --- 10.0.0.1 ping statistics --- 00:22:26.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.022 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1790212 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1790212 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1790212 ']' 00:22:26.022 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.023 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:26.023 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
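
nvmftestinit above builds the standard two-port topology for these runs: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables ACCEPT for TCP 4420 is inserted with an SPDK_NVMF comment so the iptr helper can later purge it with iptables-save | grep -v SPDK_NVMF | iptables-restore, and a ping in each direction proves the path before the target starts. A veth-based sketch of the same topology for a machine without two physical ports; every name below is illustrative:

    #!/usr/bin/env bash
    # Sketch: target/initiator split with a veth pair instead of two E810 ports.
    # Namespace and interface names are assumptions, not the ones this job uses.
    set -e
    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Tag the rule so "iptables-save | grep -v SPDK_NVMF | iptables-restore" removes it.
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1      # target -> initiator
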
00:22:26.023 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:26.023 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.023 [2024-11-06 13:19:07.353487] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:22:26.023 [2024-11-06 13:19:07.353557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.023 [2024-11-06 13:19:07.456918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.023 [2024-11-06 13:19:07.508898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.023 [2024-11-06 13:19:07.508951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.023 [2024-11-06 13:19:07.508959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.023 [2024-11-06 13:19:07.508966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.023 [2024-11-06 13:19:07.508972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.023 [2024-11-06 13:19:07.511320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.023 [2024-11-06 13:19:07.511480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.023 [2024-11-06 13:19:07.511641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.023 [2024-11-06 13:19:07.511642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.284 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:26.284 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:26.284 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.284 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.284 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.544 [2024-11-06 13:19:08.231550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.544 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:26.544 13:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.545 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.545 Malloc1 
00:22:26.545 [2024-11-06 13:19:08.356374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.545 Malloc2 00:22:26.545 Malloc3 00:22:26.806 Malloc4 00:22:26.806 Malloc5 00:22:26.806 Malloc6 00:22:26.806 Malloc7 00:22:26.806 Malloc8 00:22:27.068 Malloc9 00:22:27.068 Malloc10 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1790487 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1790487 /var/tmp/bdevperf.sock 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1790487 ']' 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
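
create_subsystems above is deliberately batched: the shutdown.sh@29 cat calls append one block of RPC lines per subsystem into rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays the whole file at once, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear together. The gen_nvmf_target_json heredocs that follow assemble the matching bdevperf --json config one Nvme$subsystem stanza at a time in the same style. A trimmed sketch of the batching idea with two subsystems; the rpc.py path and serial numbers are assumptions, and it relies on rpc.py accepting one command per line on stdin:

    #!/usr/bin/env bash
    # Sketch: build one RPC batch file and replay it in a single call, mirroring
    # the rpcs.txt pattern above. RPC path and serials are illustrative.
    RPC=/path/to/spdk/scripts/rpc.py
    : > rpcs.txt
    for i in 1 2; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    "$RPC" < rpcs.txt   # one invocation executes every line in order
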
00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 
00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 [2024-11-06 13:19:08.880004] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:22:27.068 [2024-11-06 13:19:08.880077] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.068 )") 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.068 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.068 { 00:22:27.068 "params": { 00:22:27.068 "name": "Nvme$subsystem", 00:22:27.068 "trtype": "$TEST_TRANSPORT", 00:22:27.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.068 "adrfam": "ipv4", 00:22:27.068 "trsvcid": "$NVMF_PORT", 00:22:27.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.068 "hdgst": ${hdgst:-false}, 00:22:27.068 "ddgst": ${ddgst:-false} 00:22:27.068 }, 00:22:27.068 "method": "bdev_nvme_attach_controller" 00:22:27.068 } 00:22:27.068 EOF 00:22:27.069 )") 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.069 { 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme$subsystem", 00:22:27.069 "trtype": "$TEST_TRANSPORT", 00:22:27.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "$NVMF_PORT", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.069 "hdgst": ${hdgst:-false}, 00:22:27.069 "ddgst": ${ddgst:-false} 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 } 00:22:27.069 EOF 00:22:27.069 )") 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.069 { 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme$subsystem", 00:22:27.069 "trtype": "$TEST_TRANSPORT", 00:22:27.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.069 "adrfam": "ipv4", 
00:22:27.069 "trsvcid": "$NVMF_PORT", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.069 "hdgst": ${hdgst:-false}, 00:22:27.069 "ddgst": ${ddgst:-false} 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 } 00:22:27.069 EOF 00:22:27.069 )") 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:27.069 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme1", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme2", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme3", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme4", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme5", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme6", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme7", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 
"adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme8", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme9", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 },{ 00:22:27.069 "params": { 00:22:27.069 "name": "Nvme10", 00:22:27.069 "trtype": "tcp", 00:22:27.069 "traddr": "10.0.0.2", 00:22:27.069 "adrfam": "ipv4", 00:22:27.069 "trsvcid": "4420", 00:22:27.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.069 "hdgst": false, 00:22:27.069 "ddgst": false 00:22:27.069 }, 00:22:27.069 "method": "bdev_nvme_attach_controller" 00:22:27.069 }' 00:22:27.330 [2024-11-06 13:19:08.977524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.330 [2024-11-06 13:19:09.031234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.717 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.717 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:28.717 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:28.717 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.717 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:28.718 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.718 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1790487 00:22:28.718 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:28.718 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:29.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1790487 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1790212 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.661 "hdgst": ${hdgst:-false}, 00:22:29.661 "ddgst": ${ddgst:-false} 00:22:29.661 }, 00:22:29.661 "method": "bdev_nvme_attach_controller" 00:22:29.661 } 00:22:29.661 EOF 00:22:29.661 )") 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.661 [2024-11-06 13:19:11.389160] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:22:29.661 [2024-11-06 13:19:11.389216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790968 ] 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.661 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.661 { 00:22:29.661 "params": { 00:22:29.661 "name": "Nvme$subsystem", 00:22:29.661 "trtype": "$TEST_TRANSPORT", 00:22:29.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.661 "adrfam": "ipv4", 00:22:29.661 "trsvcid": "$NVMF_PORT", 00:22:29.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.662 "hdgst": ${hdgst:-false}, 00:22:29.662 "ddgst": ${ddgst:-false} 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 } 00:22:29.662 EOF 00:22:29.662 )") 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.662 { 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme$subsystem", 00:22:29.662 "trtype": "$TEST_TRANSPORT", 00:22:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "$NVMF_PORT", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.662 "hdgst": ${hdgst:-false}, 00:22:29.662 "ddgst": ${ddgst:-false} 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 } 00:22:29.662 EOF 00:22:29.662 )") 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.662 { 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme$subsystem", 00:22:29.662 "trtype": "$TEST_TRANSPORT", 00:22:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "$NVMF_PORT", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.662 "hdgst": ${hdgst:-false}, 00:22:29.662 "ddgst": ${ddgst:-false} 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 } 00:22:29.662 EOF 00:22:29.662 )") 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.662 { 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme$subsystem", 00:22:29.662 "trtype": "$TEST_TRANSPORT", 00:22:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.662 
"adrfam": "ipv4", 00:22:29.662 "trsvcid": "$NVMF_PORT", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.662 "hdgst": ${hdgst:-false}, 00:22:29.662 "ddgst": ${ddgst:-false} 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 } 00:22:29.662 EOF 00:22:29.662 )") 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:29.662 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme1", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme2", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme3", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme4", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme5", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme6", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme7", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 
00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme8", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme9", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 },{ 00:22:29.662 "params": { 00:22:29.662 "name": "Nvme10", 00:22:29.662 "trtype": "tcp", 00:22:29.662 "traddr": "10.0.0.2", 00:22:29.662 "adrfam": "ipv4", 00:22:29.662 "trsvcid": "4420", 00:22:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:29.662 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:29.662 "hdgst": false, 00:22:29.662 "ddgst": false 00:22:29.662 }, 00:22:29.662 "method": "bdev_nvme_attach_controller" 00:22:29.662 }' 00:22:29.662 [2024-11-06 13:19:11.480683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.662 [2024-11-06 13:19:11.516519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.049 Running I/O for 1 seconds... 
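[Editor's note] In the results table that follows, the MiB/s column is simply IOPS times the 64 KiB I/O size requested with -o 65536, i.e. IOPS divided by 16. A quick sanity check against the numbers bdevperf prints:

# 65536 bytes = 1/16 MiB, so MiB/s = IOPS / 16
$ awk 'BEGIN { printf "%.2f\n", 2409.75 / 16 }'   # total row from the table
150.61
$ awk 'BEGIN { printf "%.2f\n", 214.45 / 16 }'    # Nvme1n1 alone
13.40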
00:22:32.250 1854.00 IOPS, 115.88 MiB/s
00:22:32.250 Latency(us)
00:22:32.250 [2024-11-06T12:19:14.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme1n1 : 1.19 214.45 13.40 0.00 0.00 290730.67 35607.89 267386.88
00:22:32.250 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme2n1 : 1.14 225.01 14.06 0.00 0.00 276792.96 18896.21 253405.87
00:22:32.250 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme3n1 : 1.12 228.52 14.28 0.00 0.00 267599.79 17803.95 267386.88
00:22:32.250 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme4n1 : 1.13 226.55 14.16 0.00 0.00 264712.96 19333.12 255153.49
00:22:32.250 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme5n1 : 1.13 226.96 14.19 0.00 0.00 259846.61 14636.37 251658.24
00:22:32.250 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.250 Nvme6n1 : 1.17 219.15 13.70 0.00 0.00 265081.60 19879.25 251658.24
00:22:32.250 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.250 Verification LBA range: start 0x0 length 0x400
00:22:32.251 Nvme7n1 : 1.17 273.45 17.09 0.00 0.00 208514.65 9448.11 260396.37
00:22:32.251 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.251 Verification LBA range: start 0x0 length 0x400
00:22:32.251 Nvme8n1 : 1.20 269.29 16.83 0.00 0.00 208025.72 1925.12 262144.00
00:22:32.251 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.251 Verification LBA range: start 0x0 length 0x400
00:22:32.251 Nvme9n1 : 1.21 262.68 16.42 0.00 0.00 209662.91 12069.55 255153.49
00:22:32.251 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:32.251 Verification LBA range: start 0x0 length 0x400
00:22:32.251 Nvme10n1 : 1.21 263.71 16.48 0.00 0.00 205559.51 6717.44 270882.13
00:22:32.251 [2024-11-06T12:19:14.153Z] ===================================================================================================================
00:22:32.251 [2024-11-06T12:19:14.153Z] Total : 2409.75 150.61 0.00 0.00 242222.63 1925.12 270882.13
00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:32.251 13:19:14
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.251 rmmod nvme_tcp 00:22:32.251 rmmod nvme_fabrics 00:22:32.251 rmmod nvme_keyring 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1790212 ']' 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1790212 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1790212 ']' 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1790212 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:32.251 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1790212 00:22:32.512 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:32.512 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:32.512 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1790212' 00:22:32.512 killing process with pid 1790212 00:22:32.512 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1790212 00:22:32.512 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1790212 00:22:32.772 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.772 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.772 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.772 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:32.772 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:32.772 13:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.773 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.686 00:22:34.686 real 0m16.932s 00:22:34.686 user 0m33.647s 00:22:34.686 sys 0m7.021s 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.686 ************************************ 00:22:34.686 END TEST nvmf_shutdown_tc1 00:22:34.686 ************************************ 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:34.686 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.947 ************************************ 00:22:34.947 START TEST nvmf_shutdown_tc2 00:22:34.947 ************************************ 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.947 
13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:34.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:34.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.947 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:34.948 Found net devices under 0000:31:00.0: cvl_0_0 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:34.948 Found net devices under 0000:31:00.1: cvl_0_1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.948 13:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.948 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data.
00:22:35.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms
00:22:35.209
00:22:35.209 --- 10.0.0.2 ping statistics ---
00:22:35.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.209 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:35.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:35.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:22:35.209
00:22:35.209 --- 10.0.0.1 ping statistics ---
00:22:35.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:35.209 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1792088
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1792088
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1792088 ']'
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
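[Editor's note] Both pings pass because of the plumbing nvmf_tcp_init laid down just above: one port of the e810 pair is moved into a private namespace for the target and the two ends are addressed back to back. Condensed from the xtrace (the real ipts wrapper also tags the iptables rule with an SPDK_NVMF comment so the later iptr teardown can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore):

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator end
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target end
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port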
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.209 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.209 [2024-11-06 13:19:17.034142] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:22:35.209 [2024-11-06 13:19:17.034194] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.469 [2024-11-06 13:19:17.125510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.470 [2024-11-06 13:19:17.159871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.470 [2024-11-06 13:19:17.159901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.470 [2024-11-06 13:19:17.159907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.470 [2024-11-06 13:19:17.159911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.470 [2024-11-06 13:19:17.159916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.470 [2024-11-06 13:19:17.161233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.470 [2024-11-06 13:19:17.161382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.470 [2024-11-06 13:19:17.161532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.470 [2024-11-06 13:19:17.161534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.041 [2024-11-06 13:19:17.879321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.041 
13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:36.041 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:36.300 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:36.300 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:36.300 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.300 Malloc1 00:22:36.300 [2024-11-06 13:19:17.989582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.300 Malloc2 00:22:36.300 Malloc3 00:22:36.300 Malloc4 00:22:36.300 Malloc5 00:22:36.300 Malloc6 00:22:36.300 Malloc7 00:22:36.561 Malloc8 00:22:36.561 Malloc9 00:22:36.561 Malloc10 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1792466 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1792466 /var/tmp/bdevperf.sock 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1792466 ']' 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:36.561 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": 
"bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 [2024-11-06 13:19:18.434881] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:22:36.562 [2024-11-06 13:19:18.434935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792466 ] 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 "adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:36.562 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:36.562 { 00:22:36.562 "params": { 00:22:36.562 "name": "Nvme$subsystem", 00:22:36.562 "trtype": "$TEST_TRANSPORT", 00:22:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:36.562 
"adrfam": "ipv4", 00:22:36.562 "trsvcid": "$NVMF_PORT", 00:22:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:36.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:36.562 "hdgst": ${hdgst:-false}, 00:22:36.562 "ddgst": ${ddgst:-false} 00:22:36.562 }, 00:22:36.562 "method": "bdev_nvme_attach_controller" 00:22:36.562 } 00:22:36.562 EOF 00:22:36.562 )") 00:22:36.823 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:36.823 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:36.823 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:36.823 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:36.823 "params": { 00:22:36.823 "name": "Nvme1", 00:22:36.823 "trtype": "tcp", 00:22:36.823 "traddr": "10.0.0.2", 00:22:36.823 "adrfam": "ipv4", 00:22:36.823 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme2", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme3", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme4", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme5", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme6", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme7", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 
00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme8", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme9", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 },{ 00:22:36.824 "params": { 00:22:36.824 "name": "Nvme10", 00:22:36.824 "trtype": "tcp", 00:22:36.824 "traddr": "10.0.0.2", 00:22:36.824 "adrfam": "ipv4", 00:22:36.824 "trsvcid": "4420", 00:22:36.824 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.824 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.824 "hdgst": false, 00:22:36.824 "ddgst": false 00:22:36.824 }, 00:22:36.824 "method": "bdev_nvme_attach_controller" 00:22:36.824 }' 00:22:36.824 [2024-11-06 13:19:18.525466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.824 [2024-11-06 13:19:18.561824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.205 Running I/O for 10 seconds... 
00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.205 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:38.205 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:38.205 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.465 13:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:38.465 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1792466 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1792466 ']' 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1792466 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:38.725 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.986 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1792466 00:22:38.986 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:38.986 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:38.986 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1792466' 00:22:38.986 killing process with pid 1792466 00:22:38.986 13:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1792466
00:22:38.986 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1792466
00:22:38.986 2366.00 IOPS, 147.88 MiB/s [2024-11-06T12:19:20.888Z] Received shutdown signal, test time was about 1.037267 seconds
00:22:38.986
00:22:38.986 Latency(us)
00:22:38.986 [2024-11-06T12:19:20.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme1n1 : 0.99 259.83 16.24 0.00 0.00 243449.39 16165.55 242920.11
00:22:38.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme2n1 : 0.97 198.27 12.39 0.00 0.00 312401.35 22391.47 248162.99
00:22:38.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme3n1 : 0.97 262.68 16.42 0.00 0.00 231009.81 9120.43 217579.52
00:22:38.986 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme4n1 : 1.00 256.71 16.04 0.00 0.00 231919.47 13762.56 248162.99
00:22:38.986 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme5n1 : 0.99 258.68 16.17 0.00 0.00 225158.83 18568.53 225443.84
00:22:38.986 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme6n1 : 1.04 247.02 15.44 0.00 0.00 221549.87 16274.77 239424.85
00:22:38.986 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme7n1 : 0.98 260.58 16.29 0.00 0.00 213741.87 21299.20 256901.12
00:22:38.986 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme8n1 : 0.99 258.10 16.13 0.00 0.00 211250.99 18896.21 249910.61
00:22:38.986 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme9n1 : 0.96 199.05 12.44 0.00 0.00 266101.76 14308.69 248162.99
00:22:38.986 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:38.986 Verification LBA range: start 0x0 length 0x400
00:22:38.986 Nvme10n1 : 0.98 196.16 12.26 0.00 0.00 264554.10 17257.81 267386.88
00:22:38.986 [2024-11-06T12:19:20.888Z] ===================================================================================================================
00:22:38.986 [2024-11-06T12:19:20.888Z] Total : 2397.09 149.82 0.00 0.00 238959.26 9120.43 267386.88
00:22:39.247 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1792088
00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:40.187 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:40.187 rmmod nvme_tcp 00:22:40.187 rmmod nvme_fabrics 00:22:40.187 rmmod nvme_keyring 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1792088 ']' 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1792088 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1792088 ']' 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1792088 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1792088 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1792088' 00:22:40.187 killing process with pid 1792088 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1792088 00:22:40.187 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1792088 00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']'
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:40.448 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:42.991
00:22:42.991 real 0m7.780s
00:22:42.991 user 0m23.228s
00:22:42.991 sys 0m1.330s
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:42.991 ************************************
00:22:42.991 END TEST nvmf_shutdown_tc2
00:22:42.991 ************************************
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:42.991 ************************************
00:22:42.991 START TEST nvmf_shutdown_tc3
00:22:42.991 ************************************
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:42.991 13:19:24
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.991 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.992 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:42.992 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:42.992 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:42.992 Found net devices under 0000:31:00.0: cvl_0_0 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:42.992 Found net devices under 0000:31:00.1: cvl_0_1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:22:42.992 00:22:42.992 --- 10.0.0.2 ping statistics --- 00:22:42.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.992 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:22:42.992 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:22:42.992 00:22:42.992 --- 10.0.0.1 ping statistics --- 00:22:42.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.992 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1793868 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1793868 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.993 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1793868 ']' 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.993 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.254 [2024-11-06 13:19:24.935024] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:22:43.254 [2024-11-06 13:19:24.935106] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.254 [2024-11-06 13:19:25.035124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.254 [2024-11-06 13:19:25.068968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.254 [2024-11-06 13:19:25.069010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.254 [2024-11-06 13:19:25.069016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.254 [2024-11-06 13:19:25.069021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.254 [2024-11-06 13:19:25.069025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
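At this point the harness has wired the cvl_0_0/cvl_0_1 test interfaces into a namespace pair, opened the NVMe/TCP port, verified reachability in both directions, and started the target. Reproduced by hand, the commands recorded above amount to roughly the following sketch (interface names, paths, and masks copied from this run):

    # Open the NVMe/TCP port toward the peer interface; the SPDK_NVMF comment
    # tag is what lets the suite find and delete the rule again at teardown.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Start the target inside the test namespace: shm id 0, every tracepoint
    # group enabled (-e 0xFFFF), reactors on cores 1-4 (-m 0x1E, matching the
    # four "Reactor started" notices that follow).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

    # While it runs, snapshot the trace buffer named in the startup notice.
    spdk_trace -s nvmf -i 0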
00:22:43.254 [2024-11-06 13:19:25.070381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.254 [2024-11-06 13:19:25.070533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.254 [2024-11-06 13:19:25.070683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.254 [2024-11-06 13:19:25.070685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:43.824 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.824 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 [2024-11-06 13:19:25.772977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.085 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.085 Malloc1 00:22:44.085 [2024-11-06 13:19:25.880559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.085 Malloc2 00:22:44.085 Malloc3 00:22:44.085 Malloc4 00:22:44.345 Malloc5 00:22:44.345 Malloc6 00:22:44.345 Malloc7 00:22:44.345 Malloc8 00:22:44.345 Malloc9 00:22:44.345 Malloc10 00:22:44.345 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.345 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:44.345 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.345 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1794082 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1794082 /var/tmp/bdevperf.sock 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1794082 ']' 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.606 13:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 "name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 "name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 
"name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 "name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 "name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.606 { 00:22:44.606 "params": { 00:22:44.606 "name": "Nvme$subsystem", 00:22:44.606 "trtype": "$TEST_TRANSPORT", 00:22:44.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.606 "adrfam": "ipv4", 00:22:44.606 "trsvcid": "$NVMF_PORT", 00:22:44.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.606 "hdgst": ${hdgst:-false}, 00:22:44.606 "ddgst": ${ddgst:-false} 00:22:44.606 }, 00:22:44.606 "method": "bdev_nvme_attach_controller" 00:22:44.606 } 00:22:44.606 EOF 00:22:44.606 )") 00:22:44.606 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.606 [2024-11-06 13:19:26.335600] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:22:44.606 [2024-11-06 13:19:26.335653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794082 ]
[... the remaining identical per-subsystem heredoc xtrace blocks elided down to the final one, which follows ...]
13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.607 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.607 { 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme$subsystem", 00:22:44.607 "trtype": "$TEST_TRANSPORT", 00:22:44.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.607
"adrfam": "ipv4", 00:22:44.607 "trsvcid": "$NVMF_PORT", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.607 "hdgst": ${hdgst:-false}, 00:22:44.607 "ddgst": ${ddgst:-false} 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 } 00:22:44.607 EOF 00:22:44.607 )") 00:22:44.607 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:44.607 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:44.607 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.607 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme1", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme2", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme3", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme4", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme5", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme6", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme7", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 
00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme8", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme9", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 },{ 00:22:44.607 "params": { 00:22:44.607 "name": "Nvme10", 00:22:44.607 "trtype": "tcp", 00:22:44.607 "traddr": "10.0.0.2", 00:22:44.607 "adrfam": "ipv4", 00:22:44.607 "trsvcid": "4420", 00:22:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.607 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.607 "hdgst": false, 00:22:44.607 "ddgst": false 00:22:44.607 }, 00:22:44.607 "method": "bdev_nvme_attach_controller" 00:22:44.607 }' 00:22:44.607 [2024-11-06 13:19:26.427249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.607 [2024-11-06 13:19:26.463827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.989 Running I/O for 10 seconds... 
00:22:45.989 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.989 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:45.989 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:45.989 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.989 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.249 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.249 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:46.249 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:46.249 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:46.509 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1793868 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1793868 ']' 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1793868 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:46.769 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1793868 00:22:47.044 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:47.044 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:47.044 13:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1793868' 00:22:47.044 killing process with pid 1793868 00:22:47.044 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1793868 00:22:47.044 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1793868 00:22:47.044 [2024-11-06 13:19:28.709495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d6d10 is same with the state(6) to be set
[... the same "recv state ... is same with the state(6) to be set" error repeated for tqpair=0x9d6d10 elided ...]
00:22:47.045 [2024-11-06 13:19:28.711090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d470 is same with the state(6) to be set
[... repeats for tqpair=0x77d470 elided ...]
00:22:47.046 [2024-11-06 13:19:28.712621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d940 is same with the state(6) to be set
[... repeats for tqpair=0x77d940 elided ...]
00:22:47.047 [2024-11-06 13:19:28.713939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set
[... repeats for tqpair=0x77de30 elided ...]
00:22:47.047 [2024-11-06 13:19:28.713981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.713987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.713991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.713997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is 
same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.714264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77de30 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715433] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.047 [2024-11-06 13:19:28.715456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 
00:22:47.048 [2024-11-06 13:19:28.715549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is 
same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.715688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e7d0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.048 [2024-11-06 13:19:28.716678] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 
00:22:47.049 [2024-11-06 13:19:28.716794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.716812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77eca0 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is 
same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.717999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.049 [2024-11-06 13:19:28.718057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.718118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.050 [2024-11-06 13:19:28.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.050 [2024-11-06 13:19:28.720066] 
00:22:47.050 [2024-11-06 13:19:28.720029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.050 [2024-11-06 13:19:28.720066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.050 [... matching WRITE/completion pairs for cid:10 through cid:63 (lba 25856 to 32640 in 128-block strides, every command ABORTED - SQ DELETION) collapsed through 13:19:28.721018 ...]
00:22:47.051 [2024-11-06 13:19:28.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.051 [2024-11-06 13:19:28.721035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.051 [... matching READ/completion pairs for cid:1 through cid:4 (lba 24704 to 25088) collapsed ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.720958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.720965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.720984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.720994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.051 [2024-11-06 13:19:28.721735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.051 [2024-11-06 13:19:28.721899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.051 [2024-11-06 13:19:28.721907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.721916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.721923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.721933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.721940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.721949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.721966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.721973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.721984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.721992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.052 [2024-11-06 13:19:28.722577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.052 [2024-11-06 13:19:28.722586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.053 [2024-11-06 13:19:28.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.722878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:47.053 [2024-11-06 13:19:28.723041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9d1b0 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.723138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.723194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.723202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9b10 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.723224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.727462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.727482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.727488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.727498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7480 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.736787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 
[2024-11-06 13:19:28.736826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.736837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.736848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.736859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.736869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.736879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.736888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416630 is same with the state(6) to be set 00:22:47.053 [2024-11-06 13:19:28.736959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.736973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.736981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.736989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.736999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.053 [2024-11-06 13:19:28.737016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.053 [2024-11-06 13:19:28.737024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a250 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23edfb0 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ef860 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 
[2024-11-06 13:19:28.737312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a070 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90dd0 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90fd0 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.054 [2024-11-06 13:19:28.737583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9cd30 is same with the state(6) to be set 00:22:47.054 [2024-11-06 13:19:28.737639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:47.054 [2024-11-06 13:19:28.737768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.054 [2024-11-06 13:19:28.737876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.054 [2024-11-06 13:19:28.737884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.737894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.737911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.737931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 
[2024-11-06 13:19:28.737948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.737966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.737984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.737991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 
13:19:28.738127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 
13:19:28.738312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 
13:19:28.738490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.055 [2024-11-06 13:19:28.738599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.055 [2024-11-06 13:19:28.738609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 
13:19:28.738670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.738801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.738809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.741586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:47.056 [2024-11-06 13:19:28.741625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90dd0 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9d1b0 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9b10 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2416630 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a250 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23edfb0 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ef860 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a070 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90fd0 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.741826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9cd30 (9): Bad file descriptor 00:22:47.056 [2024-11-06 13:19:28.743575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:47.056 [2024-11-06 13:19:28.743601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:47.056 [2024-11-06 13:19:28.745273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.056 [2024-11-06 13:19:28.745320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90dd0 with addr=10.0.0.2, port=4420 00:22:47.056 [2024-11-06 13:19:28.745336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90dd0 is same with the state(6) to be set 00:22:47.056 [2024-11-06 13:19:28.745695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.056 [2024-11-06 13:19:28.745712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a250 with addr=10.0.0.2, port=4420 00:22:47.056 [2024-11-06 13:19:28.745722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a250 is same with the state(6) to be set 00:22:47.056 [2024-11-06 13:19:28.746215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.056 [2024-11-06 13:19:28.746260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9d1b0 with addr=10.0.0.2, port=4420 00:22:47.056 [2024-11-06 13:19:28.746273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9d1b0 is same with the state(6) to be set 00:22:47.056 [2024-11-06 13:19:28.746662] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.056 [2024-11-06 13:19:28.746713] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.056 [2024-11-06 13:19:28.746764] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.056 [2024-11-06 13:19:28.746813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 
[2024-11-06 13:19:28.746860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.746989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.746999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747087] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.056 [2024-11-06 13:19:28.747210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.056 [2024-11-06 13:19:28.747220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.747979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.747988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.748002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.748012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.748023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.748033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.748045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.748056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.748068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.057 [2024-11-06 13:19:28.748076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.057 [2024-11-06 13:19:28.748089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.748228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.748331] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.058 [2024-11-06 13:19:28.748438] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.058 [2024-11-06 13:19:28.748472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90dd0 (9): Bad file descriptor 00:22:47.058 [2024-11-06 13:19:28.748487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a250 (9): Bad file descriptor 00:22:47.058 [2024-11-06 13:19:28.748503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9d1b0 (9): Bad file descriptor 00:22:47.058 [2024-11-06 13:19:28.750122] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:47.058 [2024-11-06 13:19:28.750157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:47.058 [2024-11-06 13:19:28.750186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:47.058 [2024-11-06 13:19:28.750195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:47.058 [2024-11-06 13:19:28.750207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:47.058 [2024-11-06 13:19:28.750218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:47.058 [2024-11-06 13:19:28.750228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:47.058 [2024-11-06 13:19:28.750237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:47.058 [2024-11-06 13:19:28.750246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:47.058 [2024-11-06 13:19:28.750253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:47.058 [2024-11-06 13:19:28.750263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:47.058 [2024-11-06 13:19:28.750270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:47.058 [2024-11-06 13:19:28.750280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:47.058 [2024-11-06 13:19:28.750287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:47.058 [2024-11-06 13:19:28.750735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.058 [2024-11-06 13:19:28.750761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2416630 with addr=10.0.0.2, port=4420 00:22:47.058 [2024-11-06 13:19:28.750771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416630 is same with the state(6) to be set 00:22:47.058 [2024-11-06 13:19:28.751137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2416630 (9): Bad file descriptor 00:22:47.058 [2024-11-06 13:19:28.751192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:47.058 [2024-11-06 13:19:28.751201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:47.058 [2024-11-06 13:19:28.751210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:47.058 [2024-11-06 13:19:28.751218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:47.058 [2024-11-06 13:19:28.751778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.751990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.751998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.058 [2024-11-06 13:19:28.752109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.058 [2024-11-06 13:19:28.752119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.059 [2024-11-06 13:19:28.752863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.059 [2024-11-06 13:19:28.752871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.752981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.752989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a2350 is same with the state(6) to be set 00:22:47.060 [2024-11-06 13:19:28.754267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754503] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.060 [2024-11-06 13:19:28.754887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.060 [2024-11-06 13:19:28.754896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.754914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.754932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.754950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.754988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.754998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:47.061 [2024-11-06 13:19:28.755264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 
13:19:28.755448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.755476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.755485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390860 is same with the state(6) to be set 00:22:47.061 [2024-11-06 13:19:28.756771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.061 [2024-11-06 13:19:28.756935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.061 [2024-11-06 13:19:28.756942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.756952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.756960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.756969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.756978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.756987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.756999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.062 [2024-11-06 13:19:28.757585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.062 [2024-11-06 13:19:28.757595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.757965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.757973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239d940 is same with the state(6) to be set 00:22:47.063 [2024-11-06 13:19:28.759244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759307] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.063 [2024-11-06 13:19:28.759482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.063 [2024-11-06 13:19:28.759492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.063 [2024-11-06 13:19:28.759500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.063 [... the same command/completion pair repeats for WRITE sqid:1 cid:6-7 (lba:33536, 33664) and for READ sqid:1 cid:16-63 (lba:26624 through 32640 in steps of 128), each len:128, every command completing with ABORTED - SQ DELETION (00/08) ...]
00:22:47.065 [2024-11-06 13:19:28.760449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a1850 is same with the state(6) to be set
00:22:47.065 [2024-11-06 13:19:28.761714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.065 [2024-11-06 13:19:28.761729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.065 [... the same pair repeats for READ sqid:1 cid:1-63 (lba:16512 through 24448 in steps of 128), each len:128, every command completing with ABORTED - SQ DELETION (00/08) ...]
00:22:47.066 [2024-11-06 13:19:28.762927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a4340 is same with the state(6) to be set
00:22:47.066 [2024-11-06 13:19:28.764199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:47.066 [2024-11-06 13:19:28.764215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:47.067 [... the same pair repeats for READ sqid:1 cid:1-63 (lba:16512 through 24448 in steps of 128), each len:128, every command completing with ABORTED - SQ DELETION (00/08) ...]
00:22:47.068 [2024-11-06 13:19:28.765414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5080 is same with the state(6) to be set
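The three bursts above are the initiator flushing outstanding I/O when the qpairs go down: every command completes with generic status "(00/08)", i.e. SCT 0x00 / SC 0x08, ABORTED - SQ DELETION, which is consistent with the controller resets logged just below rather than with a media failure. A minimal sketch, assuming SPDK's public API (the names are from include/spdk/nvme.h; the callback and its requeue policy are illustrative, not taken from this test), of how a completion callback can classify these completions:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            return; /* normal completion */
        }
        /* "(00/08)" in the log is status code type / status code:
         * SCT 0x00 (generic) + SC 0x08 = ABORTED - SQ DELETION, i.e. the
         * command was flushed when its submission queue was deleted
         * (e.g. during a controller reset), not failed by the media. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            fprintf(stderr, "I/O aborted by SQ deletion; requeue after reset\n");
            return;
        }
        fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                cpl->status.sct, cpl->status.sc);
    }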
00:22:47.068 [2024-11-06 13:19:28.766965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:47.068 [2024-11-06 13:19:28.766993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:47.068 [2024-11-06 13:19:28.767003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:47.068 [2024-11-06 13:19:28.767013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:47.068 [2024-11-06 13:19:28.767098] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:47.068 [2024-11-06 13:19:28.767112] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:47.068 [2024-11-06 13:19:28.767187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:47.068 task offset: 25728 on job bdev=Nvme5n1 fails
00:22:47.068
00:22:47.068 Latency(us)
00:22:47.068 [2024-11-06T12:19:28.970Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:22:47.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme1n1 ended in about 0.97 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme1n1 : 0.97  197.04  12.32  65.68  0.00  240924.80  22609.92  249910.61
00:22:47.068 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme2n1 ended in about 0.99 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme2n1 : 0.99  129.88  8.12  64.94  0.00  318717.72  15291.73  267386.88
00:22:47.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme3n1 ended in about 0.99 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme3n1 : 0.99  194.33  12.15  64.78  0.00  234823.68  31020.37  234181.97
00:22:47.068 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme4n1 ended in about 0.99 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme4n1 : 0.99  193.84  12.12  64.61  0.00  230746.03  18350.08  244667.73
00:22:47.068 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme5n1 : 0.97  197.62  12.35  65.87  0.00  221282.35  19223.89  270882.13
00:22:47.068 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme6n1 ended in about 0.98 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme6n1 : 0.98  195.68  12.23  65.23  0.00  218931.20  4669.44  246415.36
00:22:47.068 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme7n1 ended in about 0.99 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme7n1 : 0.99  201.42  12.59  64.45  0.00  210550.33  10431.15  249910.61
00:22:47.068 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme8n1 ended in about 0.97 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme8n1 : 0.97  197.37  12.34  65.79  0.00  207375.15  18786.99  246415.36
00:22:47.068 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme9n1 ended in about 1.00 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme9n1 : 1.00  128.59  8.04  64.29  0.00  277816.04  22828.37  255153.49
00:22:47.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.068 Job: Nvme10n1 ended in about 1.00 seconds with error
00:22:47.068 Verification LBA range: start 0x0 length 0x400
00:22:47.068 Nvme10n1 : 1.00  128.27  8.02  64.13  0.00  272324.27  20643.84  270882.13
00:22:47.068 [2024-11-06T12:19:28.970Z] ===================================================================================================================
00:22:47.068 [2024-11-06T12:19:28.970Z] Total : 1764.03  110.25  649.78  0.00  239499.72  4669.44  270882.13
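The table hangs together arithmetically: with the 64 KiB (65536-byte) I/O size from the job lines, MiB/s = IOPS * 65536 / 2^20 = IOPS / 16, and the Total row is the per-device sum. A minimal C sketch reproducing the derived columns (the IOPS values are copied from the rows above; the program itself is illustrative, not part of the test):

    #include <stdio.h>

    int main(void)
    {
        /* Per-device IOPS, copied from the latency table above. */
        const double iops[] = { 197.04, 129.88, 194.33, 193.84, 197.62,
                                195.68, 201.42, 197.37, 128.59, 128.27 };
        double total_iops = 0.0;

        for (int i = 0; i < 10; i++) {
            total_iops += iops[i];
            /* 65536 B per I/O => MiB/s is simply IOPS / 16. */
            printf("Nvme%dn1: %7.2f IOPS -> %5.2f MiB/s\n",
                   i + 1, iops[i], iops[i] / 16.0);
        }
        /* Matches the table's Total of 1764.03 up to rounding. */
        printf("sum: %.2f IOPS\n", total_iops);
        return 0;
    }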
error 00:22:47.068 Verification LBA range: start 0x0 length 0x400 00:22:47.068 Nvme9n1 : 1.00 128.59 8.04 64.29 0.00 277816.04 22828.37 255153.49 00:22:47.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:47.068 Job: Nvme10n1 ended in about 1.00 seconds with error 00:22:47.068 Verification LBA range: start 0x0 length 0x400 00:22:47.068 Nvme10n1 : 1.00 128.27 8.02 64.13 0.00 272324.27 20643.84 270882.13 00:22:47.068 [2024-11-06T12:19:28.970Z] =================================================================================================================== 00:22:47.068 [2024-11-06T12:19:28.970Z] Total : 1764.03 110.25 649.78 0.00 239499.72 4669.44 270882.13 00:22:47.068 [2024-11-06 13:19:28.792157] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:47.068 [2024-11-06 13:19:28.792201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:47.068 [2024-11-06 13:19:28.792675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.068 [2024-11-06 13:19:28.792695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90fd0 with addr=10.0.0.2, port=4420 00:22:47.068 [2024-11-06 13:19:28.792706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90fd0 is same with the state(6) to be set 00:22:47.068 [2024-11-06 13:19:28.793192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.068 [2024-11-06 13:19:28.793233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9cd30 with addr=10.0.0.2, port=4420 00:22:47.068 [2024-11-06 13:19:28.793245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9cd30 is same with the state(6) to be set 00:22:47.068 [2024-11-06 13:19:28.793616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.068 [2024-11-06 13:19:28.793635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9b10 with addr=10.0.0.2, port=4420 00:22:47.068 [2024-11-06 13:19:28.793643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b9b10 is same with the state(6) to be set 00:22:47.068 [2024-11-06 13:19:28.793974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.068 [2024-11-06 13:19:28.794013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a070 with addr=10.0.0.2, port=4420 00:22:47.068 [2024-11-06 13:19:28.794026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a070 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.795699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:47.069 [2024-11-06 13:19:28.795717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:47.069 [2024-11-06 13:19:28.795728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:47.069 [2024-11-06 13:19:28.795738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:47.069 [2024-11-06 13:19:28.796144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.796162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x23ef860 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.796170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ef860 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.796535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.796546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23edfb0 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.796554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23edfb0 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.796566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90fd0 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.796578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9cd30 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.796588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b9b10 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.796597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a070 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.796636] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:47.069 [2024-11-06 13:19:28.796649] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:47.069 [2024-11-06 13:19:28.796660] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:47.069 [2024-11-06 13:19:28.796672] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
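A quick sanity check on the bdevperf latency table printed above: each job uses 64 KiB IOs ("IO size: 65536"), so MiB/s is just IOPS x 65536 / 1048576 = IOPS / 16 (for Nvme1n1, 197.04 / 16 = 12.315, matching the reported 12.32), and the Total row should be the sum of the per-job rows. A minimal awk sketch of that check, assuming the console output was saved one record per line to build.log (hypothetical file name); columns are counted from the right so the leading timestamp does not matter:

  # Re-add the per-job rows of the table above and compare with its Total row.
  awk '/Nvme[0-9]+n1 : / {
         iops += $(NF-6); mibs += $(NF-5); fails += $(NF-4)
       }
       END { printf "IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mibs, fails }' build.log
  # Expected for this run: IOPS=1764.04 MiB/s=110.28 Fail/s=649.77; the Total
  # row shows 1764.03/110.25/649.78, the small deltas being per-row rounding.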
00:22:47.069 [2024-11-06 13:19:28.797061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.797076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f9d1b0 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.797084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f9d1b0 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.797317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.797328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a250 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.797335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a250 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.797667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f90dd0 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.797674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f90dd0 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.797870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.069 [2024-11-06 13:19:28.797883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2416630 with addr=10.0.0.2, port=4420 00:22:47.069 [2024-11-06 13:19:28.797891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2416630 is same with the state(6) to be set 00:22:47.069 [2024-11-06 13:19:28.797901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ef860 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.797911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23edfb0 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.797920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.797928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.797938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.797946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.797954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.797961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.797969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.797976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:47.069 [2024-11-06 13:19:28.797983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.797990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.797998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9d1b0 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.798128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a250 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.798137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f90dd0 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.798148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2416630 (9): Bad file descriptor 00:22:47.069 [2024-11-06 13:19:28.798161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:47.069 [2024-11-06 13:19:28.798235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:47.069 [2024-11-06 13:19:28.798323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:47.069 [2024-11-06 13:19:28.798330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:47.069 [2024-11-06 13:19:28.798337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:47.069 [2024-11-06 13:19:28.798343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
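The block ending here is the tail of the failover path. Every reconnect to 10.0.0.2:4420 fails with errno 111, which on Linux is ECONNREFUSED (expected here, since this is the shutdown test and spdk_app_stop has already been called on the target). The "(9): Bad file descriptor" flush errors show the qpair sockets were already torn down, and each controller then walks the same four steps: nvme_ctrlr_process_init reports the error state, reconnect_poll_async fails, nvme_ctrlr_fail marks it failed, and bdev_nvme logs "Resetting controller failed." A small grep sketch to summarize the outcome, again assuming the log was saved to build.log (hypothetical name):

  # How many connects were refused, and which subsystems never came back?
  grep -c 'connect() failed, errno = 111' build.log
  grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] Resetting controller failed' build.log |
    sort -u
  # For this run the second command should list all of cnode1..cnode10:
  # every reconnect attempt was refused, so no reset could complete.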
00:22:47.330 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1794082 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1794082 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1794082 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:48.270 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.271 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.271 rmmod nvme_tcp 00:22:48.271 
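The xtrace above shows autotest_common.sh's NOT wrapper asserting that "wait 1794082" fails once the bdevperf process is gone: the raw status 255 is clamped to 127 because anything above 128 means killed-by-signal, the case statement maps 127 to 1, and "(( !es == 0 ))" evaluates true only because es is non-zero, so NOT itself exits 0 exactly when the wrapped command failed. A simplified sketch of that logic, reduced to the branch exercised in this trace (the real helper also validates the argument with "type -t" first):

  NOT() {
      local es=0
      "$@" || es=$?             # run the wrapped command, capture its status
      (( es > 128 )) && es=127  # signal-style statuses collapse to 127
      case "$es" in 127) es=1 ;; esac
      (( !es == 0 ))            # arithmetic true (exit 0) only when es != 0
  }
  NOT false && echo "wrapped command failed, as the test expects"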
rmmod nvme_fabrics 00:22:48.271 rmmod nvme_keyring 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1793868 ']' 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1793868 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1793868 ']' 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1793868 00:22:48.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1793868) - No such process 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1793868 is not found' 00:22:48.271 Process with pid 1793868 is not found 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.271 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.816 00:22:50.816 real 0m7.693s 00:22:50.816 user 0m18.360s 00:22:50.816 sys 0m1.254s 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.816 ************************************ 00:22:50.816 END TEST nvmf_shutdown_tc3 00:22:50.816 ************************************ 00:22:50.816 13:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.816 ************************************ 00:22:50.816 START TEST nvmf_shutdown_tc4 00:22:50.816 ************************************ 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:50.816 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:50.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.816 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.817 13:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:50.817 Found net devices under 0000:31:00.0: cvl_0_0 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:50.817 Found net devices under 0000:31:00.1: cvl_0_1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.817 13:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:22:50.817 00:22:50.817 --- 10.0.0.2 ping statistics --- 00:22:50.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.817 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:50.817 00:22:50.817 --- 10.0.0.1 ping statistics --- 00:22:50.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.817 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1795452 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1795452 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1795452 ']' 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
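Condensed, the nvmftestinit plumbing traced above does the following: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with ping. The same sequence as a standalone sketch, with the interface names from this rig and minor steps such as the initial "ip -4 addr flush" calls omitted:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Running nvmf_tgt itself under "ip netns exec cvl_0_0_ns_spdk", as the nvmfappstart line below does, is what makes the listener on 10.0.0.2:4420 reachable only through that namespace's port.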
00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.817 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:50.817 [2024-11-06 13:19:32.703962] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:22:50.817 [2024-11-06 13:19:32.704029] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.077 [2024-11-06 13:19:32.799988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.077 [2024-11-06 13:19:32.833975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.077 [2024-11-06 13:19:32.834004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.077 [2024-11-06 13:19:32.834010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.077 [2024-11-06 13:19:32.834015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.077 [2024-11-06 13:19:32.834019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.077 [2024-11-06 13:19:32.835620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.077 [2024-11-06 13:19:32.835799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.077 [2024-11-06 13:19:32.836066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.077 [2024-11-06 13:19:32.836066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.648 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.648 [2024-11-06 13:19:33.546553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.909 13:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.909 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:51.909 Malloc1 
00:22:51.909 [2024-11-06 13:19:33.665145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.909 Malloc2 00:22:51.909 Malloc3 00:22:51.909 Malloc4 00:22:51.909 Malloc5 00:22:52.167 Malloc6 00:22:52.167 Malloc7 00:22:52.167 Malloc8 00:22:52.167 Malloc9 00:22:52.167 Malloc10 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1795830 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:52.167 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:52.427 [2024-11-06 13:19:34.143306] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1795452 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1795452 ']' 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1795452 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1795452 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1795452' 00:22:57.720 killing process with pid 1795452 00:22:57.720 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1795452 00:22:57.721 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1795452 00:22:57.721 [2024-11-06 13:19:39.138524] 
00:22:57.721 [2024-11-06 13:19:39.138570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2a90 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.138871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2f60 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.139281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fad10 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.139608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a25c0 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.143404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde10 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.143821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fe300 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.144100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fe7d0 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.144355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd940 is same with the state(6) to be set
00:22:57.721 Write completed with error (sct=0, sc=8)
00:22:57.721 starting I/O failed: -6
00:22:57.721 [2024-11-06 13:19:39.145265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.721 [2024-11-06 13:19:39.145469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fca70 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.145726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcf60 is same with the state(6) to be set
00:22:57.721 [2024-11-06 13:19:39.145966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd450 is same with the state(6) to be set
00:22:57.721 NVMe io qpair process completion error
00:22:57.721 [2024-11-06 13:19:39.146345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc5a0 is same with the state(6) to be set
00:22:57.721 Write completed with error (sct=0, sc=8)
00:22:57.721 starting I/O failed: -6
00:22:57.722 [2024-11-06 13:19:39.147268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.722 [2024-11-06 13:19:39.147797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fbbc0 is same with the state(6) to be set
00:22:57.722 Write completed with error (sct=0, sc=8)
00:22:57.722 starting I/O failed: -6
00:22:57.722 [2024-11-06 13:19:39.148101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.722 Write completed with error (sct=0, sc=8)
00:22:57.722 starting I/O failed: -6
00:22:57.722 [2024-11-06 13:19:39.149028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.723 Write completed with error (sct=0, sc=8)
00:22:57.723 starting I/O failed: -6
00:22:57.723 [2024-11-06 13:19:39.150470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.723 NVMe io qpair process completion error
00:22:57.723 Write completed with error (sct=0, sc=8)
00:22:57.723 starting I/O failed: -6
00:22:57.723 [2024-11-06 13:19:39.151729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.723 Write completed with error (sct=0, sc=8)
00:22:57.723 starting I/O failed: -6
00:22:57.723 [2024-11-06 13:19:39.152554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.724 Write completed with error (sct=0, sc=8)
00:22:57.724 starting I/O failed: -6
00:22:57.724 [2024-11-06 13:19:39.153493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.724 Write completed with error (sct=0, sc=8)
00:22:57.724 starting I/O failed: -6
00:22:57.724 [2024-11-06 13:19:39.154968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.724 NVMe io qpair process completion error
00:22:57.725 Write completed with error (sct=0, sc=8)
00:22:57.725 starting I/O failed: -6
00:22:57.725 [2024-11-06 13:19:39.156201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.725 Write completed with error (sct=0, sc=8)
00:22:57.725 starting I/O failed: -6
00:22:57.725 [2024-11-06 13:19:39.157102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.725 Write completed with error (sct=0, sc=8)
00:22:57.726 starting I/O failed: -6
00:22:57.726 [2024-11-06 13:19:39.158026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.726 Write completed with error (sct=0, sc=8)
00:22:57.726 starting I/O failed: -6
00:22:57.726 [2024-11-06 13:19:39.160208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:57.726 NVMe io qpair process completion error
00:22:57.726 Write completed with error (sct=0, sc=8)
00:22:57.726 starting I/O failed: -6
00:22:57.726 [2024-11-06 13:19:39.161893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:57.726 Write completed with error (sct=0, sc=8)
00:22:57.726 starting I/O failed: -6
00:22:57.727 [2024-11-06 13:19:39.162712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.727 Write completed with error (sct=0, sc=8)
00:22:57.727 starting I/O failed: -6
00:22:57.727 [2024-11-06 13:19:39.163661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.727 Write completed with error (sct=0, sc=8)
00:22:57.727 starting I/O failed: -6
completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.727 starting I/O failed: -6 00:22:57.727 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 [2024-11-06 13:19:39.165319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.728 NVMe io qpair process completion error 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 starting I/O failed: -6 00:22:57.728 Write completed with error (sct=0, sc=8) 00:22:57.728 
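The two interleaved messages above are SPDK's NVMe completion path at work: once a TCP qpair dies, every queued write is completed with generic status sct=0, sc=8 (SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION, i.e. command aborted due to SQ deletion), and that status is handed to the per-I/O callback. A minimal sketch of such a callback against the public SPDK API follows; the function name and the decision to only print errors are illustrative, not taken from this test.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative write-completion callback: prints the same sct/sc pair
     * that appears in the log when a command is aborted by SQ deletion
     * (sct=0 is SPDK_NVME_SCT_GENERIC, sc=8 is
     * SPDK_NVME_SC_ABORTED_SQ_DELETION). */
    static void write_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
        }
    }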
00:22:57.728 Write completed with error (sct=0, sc=8)
00:22:57.728 starting I/O failed: -6
[... write-failure lines repeated ...]
00:22:57.728 [2024-11-06 13:19:39.166642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-failure lines repeated ...]
00:22:57.728 [2024-11-06 13:19:39.167574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-failure lines repeated ...]
00:22:57.729 [2024-11-06 13:19:39.168501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure lines repeated ...]
00:22:57.729 [2024-11-06 13:19:39.172450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.729 NVMe io qpair process completion error
[... write-failure lines repeated ...]
00:22:57.729 [2024-11-06 13:19:39.173672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-failure lines repeated ...]
00:22:57.730 [2024-11-06 13:19:39.174494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write-failure lines repeated ...]
00:22:57.730 [2024-11-06 13:19:39.175427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure lines repeated ...]
00:22:57.731 [2024-11-06 13:19:39.176843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:57.731 NVMe io qpair process completion error
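Each "CQ transport error -6 (No such device or address)" line is nvme_qpair.c reporting that reaping the completion queue hit a dead transport connection; -6 is -ENXIO, which spdk_nvme_qpair_process_completions() returns once a qpair has failed at the transport level. A minimal polling-loop sketch under that assumption (qpair setup and reconnect handling omitted; the poller name is illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* Illustrative I/O poller: reaps completions and reacts to the
     * transport failure that produces the "CQ transport error -6"
     * lines above. */
    static void poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* Returns the number of completions processed (0 means no
         * limit on max_completions), or a negated errno once the
         * qpair has failed at the transport level. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc == -ENXIO) {
            /* Connection to the target is gone: outstanding writes are
             * completed with sct=0, sc=8, and new submissions fail,
             * which this test prints as "starting I/O failed: -6". */
            fprintf(stderr, "qpair failed: %s\n", strerror(-rc));
        }
    }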
00:22:57.731 Write completed with error (sct=0, sc=8)
00:22:57.731 starting I/O failed: -6
[... write-failure lines repeated ...]
00:22:57.731 [2024-11-06 13:19:39.177990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-failure lines repeated ...]
00:22:57.731 [2024-11-06 13:19:39.178899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure lines repeated ...]
00:22:57.732 [2024-11-06 13:19:39.179821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write-failure lines repeated ...]
00:22:57.732 [2024-11-06 13:19:39.181608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:57.732 NVMe io qpair process completion error
[... write-failure lines repeated ...]
00:22:57.732 [2024-11-06 13:19:39.182934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... write-failure lines repeated ...]
00:22:57.733 [2024-11-06 13:19:39.183864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write-failure lines repeated ...]
00:22:57.733 [2024-11-06 13:19:39.184759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport
error -6 (No such device or address) on qpair id 4 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error 
(sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.733 Write completed with error (sct=0, sc=8) 00:22:57.733 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 [2024-11-06 13:19:39.187338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.734 NVMe io qpair process completion error 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write 
completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 [2024-11-06 13:19:39.188478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting 
I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 [2024-11-06 13:19:39.189299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:57.734 starting I/O failed: -6 00:22:57.734 starting I/O failed: -6 00:22:57.734 starting I/O failed: -6 00:22:57.734 starting I/O failed: -6 00:22:57.734 starting I/O failed: -6 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 
00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.734 starting I/O failed: -6 00:22:57.734 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 [2024-11-06 13:19:39.190660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 
00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 starting I/O failed: -6 00:22:57.735 [2024-11-06 13:19:39.192321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:57.735 NVMe io qpair process completion error 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 
00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.735 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error 
(sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Write completed with error (sct=0, sc=8) 00:22:57.736 Initializing NVMe Controllers 00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:57.736 Controller IO queue size 128, less than required. 00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.736 Controller IO queue size 128, less than required. 00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:57.736 Controller IO queue size 128, less than required. 
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:57.736 Controller IO queue size 128, less than required.
00:22:57.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:57.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:57.736 Initialization complete. Launching workers.
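The repeated "Controller IO queue size 128, less than required" warning above means the workload asked for a deeper queue than the fabrics controller advertises, so the surplus requests sit queued inside the NVMe driver instead of on the wire. A minimal sketch of a rerun that heeds the warning, assuming the perf binary path from this workspace and the transport details printed above; the -q/-o/-w/-t values are illustrative, and the exact flags shutdown.sh passed are not shown in this log:

```bash
# Hypothetical rerun: keep the per-qpair queue depth (-q) at or below the
# controller's advertised IO queue size (128 here) so writes are not queued
# at the driver. The transport string matches the controllers attached above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```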
00:22:57.736 ========================================================
00:22:57.736 Latency(us)
00:22:57.736 Device Information : IOPS MiB/s Average min max
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1868.42 80.28 68525.32 874.93 121059.83
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1879.59 80.76 68136.75 852.71 127682.51
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1863.05 80.05 68769.67 888.02 150031.19
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1877.65 80.68 68274.15 737.35 131834.14
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1859.40 79.90 68884.00 518.20 133670.78
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1843.08 79.19 68846.23 630.67 119518.16
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1884.74 80.98 67343.60 697.32 118795.16
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1881.30 80.84 67487.48 898.92 118080.53
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1860.90 79.96 68265.76 639.70 119082.13
00:22:57.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1837.71 78.96 69153.19 925.79 122057.60
00:22:57.736 ========================================================
00:22:57.736 Total : 18655.84 801.62 68364.66 518.20 150031.19
00:22:57.736
00:22:57.736 [2024-11-06 13:19:39.200893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17706b0 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.200942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771360 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.200973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1770380 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f9f0 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f390 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f6c0 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1770050 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17709e0 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1771540 is same with the state(6) to be set
00:22:57.736 [2024-11-06 13:19:39.201181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f060 is same with the state(6) to be set
00:22:57.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:57.736 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
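One sanity check the table permits: by Little's law, in-flight IOs = IOPS x mean latency, and each row works out to the 128-deep IO queue negotiated with every controller above, i.e. each connection ran with its queue fully occupied. A quick check using the cnode8 row:

```bash
# Little's law: in-flight IOs = IOPS * mean latency (in seconds).
# 1868.42 IOPS * 68525.32 us ~= 128 outstanding IOs, matching the
# 128-entry IO queues reported for each controller earlier in the log.
awk 'BEGIN { printf "%.1f\n", 1868.42 * 68525.32 / 1e6 }'   # prints 128.0
```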
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1795830
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1795830
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1795830
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:58.677 rmmod nvme_tcp
00:22:58.677 rmmod nvme_fabrics
00:22:58.677 rmmod nvme_keyring
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
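The set +e / for i in {1..20} / modprobe -v -r sequence traced above is the driver-unload step of nvmfcleanup: unloading can fail while connections drain, so it is retried with failures tolerated, and `modprobe -v -r` prints the rmmod commands it runs (which is where the rmmod nvme_tcp / nvme_fabrics / nvme_keyring output comes from). A rough sketch of that pattern, paraphrased from the trace rather than copied from nvmf/common.sh; the retry delay is an assumption:

```bash
# Paraphrased teardown pattern: tolerate failures (set +e), retry the module
# unload up to 20 times, then restore errexit. -v makes modprobe print the
# rmmod commands it issues for the module and its dependents.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # illustrative back-off; the real helper's pacing is not shown here
done
set -e
```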
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1795452 ']'
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1795452
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1795452 ']'
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1795452
00:22:58.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1795452) - No such process
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1795452 is not found'
00:22:58.677 Process with pid 1795452 is not found
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:58.677 13:19:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:01.223
00:23:01.223 real 0m10.301s
00:23:01.223 user 0m28.077s
00:23:01.223 sys 0m3.882s
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:01.223 ************************************
00:23:01.223 END TEST nvmf_shutdown_tc4
00:23:01.223 ************************************
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:01.223
00:23:01.223 real 0m43.302s
00:23:01.223 user 1m43.593s
00:23:01.223 sys 0m13.837s
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
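The killprocess call traced near the top of this teardown block shows the guard it relies on: kill -0 delivers no signal and only probes whether the pid still exists, so the helper can simply report when its target (1795452 here, already gone) has exited. A condensed sketch of that idiom, paraphrased from the trace rather than copied verbatim from autotest_common.sh:

```bash
# Condensed from the killprocess trace above: signal 0 checks existence only.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # mirrors the '[' -z ... ']' guard
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                         # real helper escalates and waits; abbreviated
    else
        echo "Process with pid $pid is not found"
    fi
}
```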
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:01.223 ************************************
00:23:01.223 END TEST nvmf_shutdown
00:23:01.223 ************************************
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:01.223 ************************************
00:23:01.223 START TEST nvmf_nsid
00:23:01.223 ************************************
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:23:01.223 * Looking for test storage...
00:23:01.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:01.223 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.224 --rc genhtml_branch_coverage=1 00:23:01.224 --rc genhtml_function_coverage=1 00:23:01.224 --rc genhtml_legend=1 00:23:01.224 --rc geninfo_all_blocks=1 00:23:01.224 --rc geninfo_unexecuted_blocks=1 00:23:01.224 00:23:01.224 ' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.224 --rc genhtml_branch_coverage=1 00:23:01.224 --rc genhtml_function_coverage=1 00:23:01.224 --rc genhtml_legend=1 00:23:01.224 --rc geninfo_all_blocks=1 00:23:01.224 --rc geninfo_unexecuted_blocks=1 00:23:01.224 00:23:01.224 ' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.224 --rc genhtml_branch_coverage=1 00:23:01.224 --rc genhtml_function_coverage=1 00:23:01.224 --rc genhtml_legend=1 00:23:01.224 --rc geninfo_all_blocks=1 00:23:01.224 --rc geninfo_unexecuted_blocks=1 00:23:01.224 00:23:01.224 ' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.224 --rc genhtml_branch_coverage=1 00:23:01.224 --rc genhtml_function_coverage=1 00:23:01.224 --rc genhtml_legend=1 00:23:01.224 --rc geninfo_all_blocks=1 00:23:01.224 --rc geninfo_unexecuted_blocks=1 00:23:01.224 00:23:01.224 ' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.224 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.462 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.462 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
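The two "Found 0000:31:00.x (0x8086 - 0x159b)" hits above come from gather_supported_nvmf_pci_devs matching every cached PCI vendor:device pair against the harness's Intel E810/X722 and Mellanox ConnectX ID tables; both ports are E810 25G devices bound to the ice driver, so they pass the unknown/unbound filters. A minimal stand-alone sketch of the same discovery, reading the IDs straight from sysfs (the ID table below is a small illustrative subset, not the full list nvmf/common.sh carries):

    #!/usr/bin/env bash
    # Walk the PCI bus via sysfs and report devices matching known NVMe-oF-capable NICs.
    declare -A nic_ids=(
        ["0x8086:0x1592"]="Intel E810 100G"
        ["0x8086:0x159b"]="Intel E810 25G"
        ["0x8086:0x37d2"]="Intel X722"
    )
    for dev in /sys/bus/pci/devices/*; do
        key="$(<"$dev/vendor"):$(<"$dev/device")"    # e.g. 0x8086:0x159b
        if [[ -n ${nic_ids[$key]:-} ]]; then
            driver=unbound
            [[ -e $dev/driver ]] && driver=$(basename "$(readlink "$dev/driver")")
            echo "Found ${dev##*/} ($key, driver: $driver) - ${nic_ids[$key]}"
        fi
    done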
00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.462 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.462 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.463 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.463 13:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:23:09.463 00:23:09.463 --- 10.0.0.2 ping statistics --- 00:23:09.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.463 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:23:09.463 00:23:09.463 --- 10.0.0.1 ping statistics --- 00:23:09.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.463 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1801220 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1801220 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1801220 ']' 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.463 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.463 [2024-11-06 13:19:50.531930] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
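With the two E810 ports cabled back to back, nvmf_tcp_init above moves the target port (cvl_0_0, 10.0.0.2/24) into its own network namespace and leaves the initiator port (cvl_0_1, 10.0.0.1/24) in the root namespace, opens TCP/4420 through iptables, and proves reachability with one ping in each direction before any NVMe traffic flows. On a machine without spare physical ports, roughly the same topology can be rebuilt from a veth pair; a sketch with illustrative interface and namespace names:

    # Approximate the harness topology with a veth pair instead of two NIC ports.
    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns             # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev veth_init              # initiator side, root namespace
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1       # target ns -> root ns

The target application is then launched under "ip netns exec", which is exactly what the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP does in the nvmf_tgt start just above.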
00:23:09.463 [2024-11-06 13:19:50.531996] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.463 [2024-11-06 13:19:50.635127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.463 [2024-11-06 13:19:50.686119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.463 [2024-11-06 13:19:50.686171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.463 [2024-11-06 13:19:50.686180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.463 [2024-11-06 13:19:50.686188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.463 [2024-11-06 13:19:50.686194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.463 [2024-11-06 13:19:50.686980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1801407 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e1322ea3-0866-4537-94e3-e78965ac5238 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=e6530354-43bd-406c-b24d-c92f307e5c2f 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=498024fa-da1c-4d81-8773-22c717778833 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.725 null0 00:23:09.725 null1 00:23:09.725 null2 00:23:09.725 [2024-11-06 13:19:51.471625] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:23:09.725 [2024-11-06 13:19:51.471692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801407 ] 00:23:09.725 [2024-11-06 13:19:51.472940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.725 [2024-11-06 13:19:51.497246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1801407 /var/tmp/tgt2.sock 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1801407 ']' 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:09.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
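At this point target/nsid.sh has generated one UUID per namespace and, over the second target's RPC socket, created three null bdevs whose namespaces carry those UUIDs; the NGUID-versus-dash-stripped-UUID comparisons further down are the point of the whole test, since a namespace created with an explicit UUID should surface the same 128 bits as its NGUID. A condensed sketch of that round trip, assuming the standard rpc.py verbs and using illustrative names, socket and port:

    # Create a namespace with an explicit UUID, then verify the host-visible NGUID.
    uuid=$(uuidgen)
    rpc="scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc bdev_null_create null0 100 4096                      # name, size in MB, block size
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -n 1 -u "$uuid"
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)   # same probe the test uses
    [[ $nguid == "${uuid//-/}" ]] && echo "NGUID matches namespace UUID"
    nvme disconnect -n nqn.2024-10.io.spdk:cnode2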
00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.725 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.725 [2024-11-06 13:19:51.567179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.725 [2024-11-06 13:19:51.621061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.297 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:10.297 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:10.297 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:10.297 [2024-11-06 13:19:52.191345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.558 [2024-11-06 13:19:52.207539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:10.558 nvme0n1 nvme0n2 00:23:10.558 nvme1n1 00:23:10.558 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:10.558 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:10.558 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:11.944 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:12.887 13:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e1322ea3-0866-4537-94e3-e78965ac5238 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e1322ea30866453794e3e78965ac5238 00:23:12.887 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E1322EA30866453794E3E78965AC5238 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E1322EA30866453794E3E78965AC5238 == \E\1\3\2\2\E\A\3\0\8\6\6\4\5\3\7\9\4\E\3\E\7\8\9\6\5\A\C\5\2\3\8 ]] 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid e6530354-43bd-406c-b24d-c92f307e5c2f 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e653035443bd406cb24dc92f307e5c2f 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E653035443BD406CB24DC92F307E5C2F 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ E653035443BD406CB24DC92F307E5C2F == \E\6\5\3\0\3\5\4\4\3\B\D\4\0\6\C\B\2\4\D\C\9\2\F\3\0\7\E\5\C\2\F ]] 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:13.148 13:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 498024fa-da1c-4d81-8773-22c717778833 00:23:13.148 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=498024fada1c4d81877322c717778833 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 498024FADA1C4D81877322C717778833 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 498024FADA1C4D81877322C717778833 == \4\9\8\0\2\4\F\A\D\A\1\C\4\D\8\1\8\7\7\3\2\2\C\7\1\7\7\7\8\8\3\3 ]] 00:23:13.149 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1801407 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1801407 ']' 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1801407 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1801407 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1801407' 00:23:13.410 killing process with pid 1801407 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1801407 00:23:13.410 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1801407 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.670 rmmod nvme_tcp 00:23:13.670 rmmod nvme_fabrics 00:23:13.670 rmmod nvme_keyring 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1801220 ']' 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1801220 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1801220 ']' 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1801220 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1801220 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1801220' 00:23:13.670 killing process with pid 1801220 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1801220 00:23:13.670 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1801220 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.931 13:19:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.842 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.842 00:23:15.842 real 0m15.044s 00:23:15.842 user 
0m11.499s 00:23:15.842 sys 0m6.893s 00:23:15.842 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.843 13:19:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:15.843 ************************************ 00:23:15.843 END TEST nvmf_nsid 00:23:15.843 ************************************ 00:23:16.103 13:19:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:16.103 00:23:16.103 real 13m10.060s 00:23:16.103 user 27m28.380s 00:23:16.103 sys 3m55.842s 00:23:16.103 13:19:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:16.103 13:19:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.103 ************************************ 00:23:16.103 END TEST nvmf_target_extra 00:23:16.103 ************************************ 00:23:16.103 13:19:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.103 13:19:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:16.103 13:19:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:16.103 13:19:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.103 ************************************ 00:23:16.103 START TEST nvmf_host 00:23:16.103 ************************************ 00:23:16.103 13:19:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:16.103 * Looking for test storage... 00:23:16.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:16.103 13:19:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:16.103 13:19:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:16.103 13:19:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.365 --rc genhtml_branch_coverage=1 00:23:16.365 --rc genhtml_function_coverage=1 00:23:16.365 --rc genhtml_legend=1 00:23:16.365 --rc geninfo_all_blocks=1 00:23:16.365 --rc geninfo_unexecuted_blocks=1 00:23:16.365 00:23:16.365 ' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.365 --rc genhtml_branch_coverage=1 00:23:16.365 --rc genhtml_function_coverage=1 00:23:16.365 --rc genhtml_legend=1 00:23:16.365 --rc geninfo_all_blocks=1 00:23:16.365 --rc geninfo_unexecuted_blocks=1 00:23:16.365 00:23:16.365 ' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.365 --rc genhtml_branch_coverage=1 00:23:16.365 --rc genhtml_function_coverage=1 00:23:16.365 --rc genhtml_legend=1 00:23:16.365 --rc geninfo_all_blocks=1 00:23:16.365 --rc geninfo_unexecuted_blocks=1 00:23:16.365 00:23:16.365 ' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:16.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.365 --rc genhtml_branch_coverage=1 00:23:16.365 --rc genhtml_function_coverage=1 00:23:16.365 --rc genhtml_legend=1 00:23:16.365 --rc geninfo_all_blocks=1 00:23:16.365 --rc geninfo_unexecuted_blocks=1 00:23:16.365 00:23:16.365 ' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
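Every test script opens with the same probe, seen here again for nvmf_host: take the last field of "lcov --version" and run it through a component-wise comparison ("lt 1.15 2") that splits both versions on ".", "-" and ":" and walks the numeric fields in order, which is what the decimal/ver1[v]/ver2[v] trace lines show. A simplified stand-alone version of that comparison (a sketch; the real cmp_versions in scripts/common.sh validates each component and supports more operators):

    # Succeed if version $1 is strictly less than version $2, comparing
    # numeric components split on '.', '-' and ':'. Missing components count as 0.
    version_lt() {
        local IFS='.-:'
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov: pass the extra branch/function coverage flags"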
00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.365 13:19:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.366 ************************************ 00:23:16.366 START TEST nvmf_multicontroller 00:23:16.366 ************************************ 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:16.366 * Looking for test storage... 
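The "[: : integer expression expected" complaint from nvmf/common.sh line 33, repeated here and in every test that sources the file, is the [ builtin rejecting a numeric test on an empty expansion, visible in the trace as '[' '' -eq 1 ']'. It is harmless to the run, and the usual defensive pattern is to default the variable before testing it; a sketch with an illustrative variable name:

    # [ "$FLAG" -eq 1 ] errors out when FLAG is empty or unset;
    # defaulting the expansion (and using [[ ]]) keeps the test quiet.
    FLAG=""
    if [[ ${FLAG:-0} -eq 1 ]]; then
        echo "flag set"
    else
        echo "flag unset or empty"    # taken here, with no error printed
    fi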
00:23:16.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:16.366 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:16.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.628 --rc genhtml_branch_coverage=1 00:23:16.628 --rc genhtml_function_coverage=1 00:23:16.628 --rc genhtml_legend=1 00:23:16.628 --rc geninfo_all_blocks=1 00:23:16.628 --rc geninfo_unexecuted_blocks=1 00:23:16.628 00:23:16.628 ' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:16.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.628 --rc genhtml_branch_coverage=1 00:23:16.628 --rc genhtml_function_coverage=1 00:23:16.628 --rc genhtml_legend=1 00:23:16.628 --rc geninfo_all_blocks=1 00:23:16.628 --rc geninfo_unexecuted_blocks=1 00:23:16.628 00:23:16.628 ' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:16.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.628 --rc genhtml_branch_coverage=1 00:23:16.628 --rc genhtml_function_coverage=1 00:23:16.628 --rc genhtml_legend=1 00:23:16.628 --rc geninfo_all_blocks=1 00:23:16.628 --rc geninfo_unexecuted_blocks=1 00:23:16.628 00:23:16.628 ' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:16.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.628 --rc genhtml_branch_coverage=1 00:23:16.628 --rc genhtml_function_coverage=1 00:23:16.628 --rc genhtml_legend=1 00:23:16.628 --rc geninfo_all_blocks=1 00:23:16.628 --rc geninfo_unexecuted_blocks=1 00:23:16.628 00:23:16.628 ' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:16.628 13:19:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.628 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.629 13:19:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.629 13:19:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.769 
13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:24.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:24.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.769 13:20:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:24.769 Found net devices under 0000:31:00.0: cvl_0_0 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.769 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:24.770 Found net devices under 0000:31:00.1: cvl_0_1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
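With both E810 ports detected (0000:31:00.0 and 0000:31:00.1, device 0x159b bound to the ice driver) and their net devices cvl_0_0/cvl_0_1 recorded, is_hw=yes and the suite moves on to nvmf_tcp_init, traced next. That helper splits target and initiator across a network namespace so the NVMe/TCP traffic actually crosses the physical link. A minimal sketch of the equivalent plumbing, using the interface names and addresses from this run (the real helper in test/nvmf/common.sh additionally flushes stale addresses and tags its iptables rule with an SPDK_NVMF comment so teardown can find it):

    # move the target-side port into its own namespace; the initiator stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check: initiator can reach the target address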
00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:24.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:23:24.770 00:23:24.770 --- 10.0.0.2 ping statistics --- 00:23:24.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.770 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:24.770 00:23:24.770 --- 10.0.0.1 ping statistics --- 00:23:24.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.770 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:24.770 13:20:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1806625 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1806625 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1806625 ']' 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:24.770 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.770 [2024-11-06 13:20:06.103534] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:23:24.770 [2024-11-06 13:20:06.103597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.770 [2024-11-06 13:20:06.205604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:24.770 [2024-11-06 13:20:06.258622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.770 [2024-11-06 13:20:06.258672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.770 [2024-11-06 13:20:06.258681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.770 [2024-11-06 13:20:06.258688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.770 [2024-11-06 13:20:06.258695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.770 [2024-11-06 13:20:06.260567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.770 [2024-11-06 13:20:06.260724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.770 [2024-11-06 13:20:06.260724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 [2024-11-06 13:20:06.987667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.342 13:20:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 Malloc0 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.342 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 [2024-11-06 13:20:07.058223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 [2024-11-06 13:20:07.070133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 Malloc1 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1806753 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1806753 /var/tmp/bdevperf.sock 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1806753 ']' 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
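The target side is now fully configured: nvmf_tgt runs inside cvl_0_0_ns_spdk, two 64 MiB malloc bdevs back subsystems cnode1 and cnode2, and each subsystem listens on both 10.0.0.2:4420 and 10.0.0.2:4421. The initiator side is bdevperf, started with -z so it idles until configured over its own RPC socket (/var/tmp/bdevperf.sock), independent of the target's default /var/tmp/spdk.sock. A sketch of that pattern as the trace uses it — rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the path below is the standard SPDK tool rather than anything test-specific:

    # start bdevperf idle: 4 KiB writes, queue depth 128, 1 second run, config via RPC
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # once the socket answers, attach the first controller path (multicontroller.sh@50)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1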
00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.343 13:20:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.285 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.285 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:26.285 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.285 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.285 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 NVMe0n1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.546 1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 request: 00:23:26.546 { 00:23:26.546 "name": "NVMe0", 00:23:26.546 "trtype": "tcp", 00:23:26.546 "traddr": "10.0.0.2", 00:23:26.546 "adrfam": "ipv4", 00:23:26.546 "trsvcid": "4420", 00:23:26.546 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:26.546 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:26.546 "hostaddr": "10.0.0.1", 00:23:26.546 "prchk_reftag": false, 00:23:26.546 "prchk_guard": false, 00:23:26.546 "hdgst": false, 00:23:26.546 "ddgst": false, 00:23:26.546 "allow_unrecognized_csi": false, 00:23:26.546 "method": "bdev_nvme_attach_controller", 00:23:26.546 "req_id": 1 00:23:26.546 } 00:23:26.546 Got JSON-RPC error response 00:23:26.546 response: 00:23:26.546 { 00:23:26.546 "code": -114, 00:23:26.546 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.546 } 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 request: 00:23:26.546 { 00:23:26.546 "name": "NVMe0", 00:23:26.546 "trtype": "tcp", 00:23:26.546 "traddr": "10.0.0.2", 00:23:26.546 "adrfam": "ipv4", 00:23:26.546 "trsvcid": "4420", 00:23:26.546 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.546 "hostaddr": "10.0.0.1", 00:23:26.546 "prchk_reftag": false, 00:23:26.546 "prchk_guard": false, 00:23:26.546 "hdgst": false, 00:23:26.546 "ddgst": false, 00:23:26.546 "allow_unrecognized_csi": false, 00:23:26.546 "method": "bdev_nvme_attach_controller", 00:23:26.546 "req_id": 1 00:23:26.546 } 00:23:26.546 Got JSON-RPC error response 00:23:26.546 response: 00:23:26.546 { 00:23:26.546 "code": -114, 00:23:26.546 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.546 } 00:23:26.546 13:20:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.546 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.547 request: 00:23:26.547 { 00:23:26.547 "name": "NVMe0", 00:23:26.547 "trtype": "tcp", 00:23:26.547 "traddr": "10.0.0.2", 00:23:26.547 "adrfam": "ipv4", 00:23:26.547 "trsvcid": "4420", 00:23:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.547 "hostaddr": "10.0.0.1", 00:23:26.547 "prchk_reftag": false, 00:23:26.547 "prchk_guard": false, 00:23:26.547 "hdgst": false, 00:23:26.547 "ddgst": false, 00:23:26.547 "multipath": "disable", 00:23:26.547 "allow_unrecognized_csi": false, 00:23:26.547 "method": "bdev_nvme_attach_controller", 00:23:26.547 "req_id": 1 00:23:26.547 } 00:23:26.547 Got JSON-RPC error response 00:23:26.547 response: 00:23:26.547 { 00:23:26.547 "code": -114, 00:23:26.547 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:26.547 } 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.547 13:20:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.547 request: 00:23:26.547 { 00:23:26.547 "name": "NVMe0", 00:23:26.547 "trtype": "tcp", 00:23:26.547 "traddr": "10.0.0.2", 00:23:26.547 "adrfam": "ipv4", 00:23:26.547 "trsvcid": "4420", 00:23:26.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.547 "hostaddr": "10.0.0.1", 00:23:26.547 "prchk_reftag": false, 00:23:26.547 "prchk_guard": false, 00:23:26.547 "hdgst": false, 00:23:26.547 "ddgst": false, 00:23:26.547 "multipath": "failover", 00:23:26.547 "allow_unrecognized_csi": false, 00:23:26.547 "method": "bdev_nvme_attach_controller", 00:23:26.547 "req_id": 1 00:23:26.547 } 00:23:26.547 Got JSON-RPC error response 00:23:26.547 response: 00:23:26.547 { 00:23:26.547 "code": -114, 00:23:26.547 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:26.547 } 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.547 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.808 NVMe0n1 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
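All four rejected attaches above fail with JSON-RPC error -114: reusing the bdev name NVMe0 against the same 10.0.0.2:4420 path with a different host NQN, pointing it at a different subsystem (cnode2), re-attaching with multipath explicitly disabled, and re-attaching in failover mode to the identical traddr/trsvcid all collide with the controller created first. The attach at the end of the trace above (host/multicontroller.sh@79) succeeds because the subsystem's second listener on port 4421 is a genuinely new path under the same controller name, producing NVMe0n1. Condensed, the sequence the trace exercises looks like this, all against bdevperf's RPC socket:

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001   # -114: controller exists on this path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1                                  # -114: same name, different subsystem
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable                       # -114: multipath is disabled
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover                      # -114: failover needs a new path
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                                              # OK: second path, NVMe0n1 created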
00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.808 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.069 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:27.069 13:20:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:28.011 { 00:23:28.011 "results": [ 00:23:28.011 { 00:23:28.011 "job": "NVMe0n1", 00:23:28.011 "core_mask": "0x1", 00:23:28.011 "workload": "write", 00:23:28.011 "status": "finished", 00:23:28.011 "queue_depth": 128, 00:23:28.011 "io_size": 4096, 00:23:28.011 "runtime": 1.006139, 00:23:28.011 "iops": 24989.58891365905, 00:23:28.011 "mibps": 97.61558169398066, 00:23:28.011 "io_failed": 0, 00:23:28.011 "io_timeout": 0, 00:23:28.011 "avg_latency_us": 5110.781841466809, 00:23:28.011 "min_latency_us": 2088.96, 00:23:28.011 "max_latency_us": 11468.8 00:23:28.011 } 00:23:28.011 ], 00:23:28.011 "core_count": 1 00:23:28.011 } 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1806753 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' 
-z 1806753 ']' 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1806753 00:23:28.271 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:28.272 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.272 13:20:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1806753 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1806753' 00:23:28.272 killing process with pid 1806753 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1806753 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1806753 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:28.272 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:28.272 [2024-11-06 13:20:07.201414] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:23:28.272 [2024-11-06 13:20:07.201483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806753 ]
00:23:28.272 [2024-11-06 13:20:07.295163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:28.272 [2024-11-06 13:20:07.348131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:28.272 [2024-11-06 13:20:08.769960] bdev.c:4688:bdev_name_add: *ERROR*: Bdev name aa7353e5-298e-4494-a56b-8b942189da2b already exists
00:23:28.272 [2024-11-06 13:20:08.770006] bdev.c:7833:bdev_register: *ERROR*: Unable to add uuid:aa7353e5-298e-4494-a56b-8b942189da2b alias for bdev NVMe1n1
00:23:28.272 [2024-11-06 13:20:08.770016] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:23:28.272 Running I/O for 1 seconds...
00:23:28.272 24936.00 IOPS, 97.41 MiB/s
00:23:28.272 Latency(us)
00:23:28.272 [2024-11-06T12:20:10.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.272 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:28.272 NVMe0n1 : 1.01 24989.59 97.62 0.00 0.00 5110.78 2088.96 11468.80
00:23:28.272 [2024-11-06T12:20:10.174Z] ===================================================================================================================
00:23:28.272 [2024-11-06T12:20:10.174Z] Total : 24989.59 97.62 0.00 0.00 5110.78 2088.96 11468.80
00:23:28.272 Received shutdown signal, test time was about 1.000000 seconds
00:23:28.272
00:23:28.272 Latency(us)
00:23:28.272 [2024-11-06T12:20:10.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.272 [2024-11-06T12:20:10.174Z] ===================================================================================================================
00:23:28.272 [2024-11-06T12:20:10.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:28.272 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:28.272 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:28.533 rmmod nvme_tcp
00:23:28.533 rmmod nvme_fabrics
00:23:28.533 rmmod nvme_keyring
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
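The try.txt summary above records the run's result (24989.59 write IOPS, 97.62 MiB/s at 4 KiB/QD128 over roughly one second), bdevperf has been killed, and nvmftestfini is unwinding the environment: the kernel nvme-tcp stack is unloaded first (the rmmod lines above), then the target process, its firewall rules, and its namespace are removed in the trace that follows. A condensed sketch of that teardown path, with the namespace deletion stated as what _remove_spdk_ns boils down to in this run:

    modprobe -v -r nvme-tcp                                  # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    kill "$nvmfpid"                                          # nvmf_tgt, pid 1806625 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules this test added
    ip netns delete cvl_0_0_ns_spdk                          # remove the target namespace
    ip -4 addr flush cvl_0_1                                 # clear the initiator address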
13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1806625 ']' 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1806625 ']' 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1806625' 00:23:28.533 killing process with pid 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1806625 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.533 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.795 13:20:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.710 00:23:30.710 real 0m14.403s 00:23:30.710 user 0m18.036s 00:23:30.710 sys 0m6.678s 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.710 ************************************ 00:23:30.710 END TEST nvmf_multicontroller 00:23:30.710 ************************************ 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.710 ************************************ 00:23:30.710 START TEST nvmf_aer 00:23:30.710 ************************************ 00:23:30.710 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.972 * Looking for test storage... 00:23:30.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.972 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.973 --rc genhtml_branch_coverage=1 00:23:30.973 --rc genhtml_function_coverage=1 00:23:30.973 --rc genhtml_legend=1 00:23:30.973 --rc geninfo_all_blocks=1 00:23:30.973 --rc geninfo_unexecuted_blocks=1 00:23:30.973 00:23:30.973 ' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.973 --rc genhtml_branch_coverage=1 00:23:30.973 --rc genhtml_function_coverage=1 00:23:30.973 --rc genhtml_legend=1 00:23:30.973 --rc geninfo_all_blocks=1 00:23:30.973 --rc geninfo_unexecuted_blocks=1 00:23:30.973 00:23:30.973 ' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.973 --rc genhtml_branch_coverage=1 00:23:30.973 --rc genhtml_function_coverage=1 00:23:30.973 --rc genhtml_legend=1 00:23:30.973 --rc geninfo_all_blocks=1 00:23:30.973 --rc geninfo_unexecuted_blocks=1 00:23:30.973 00:23:30.973 ' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.973 --rc genhtml_branch_coverage=1 00:23:30.973 --rc genhtml_function_coverage=1 00:23:30.973 --rc genhtml_legend=1 00:23:30.973 --rc geninfo_all_blocks=1 00:23:30.973 --rc geninfo_unexecuted_blocks=1 00:23:30.973 00:23:30.973 ' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.973 13:20:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:39.113 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.113 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:39.114 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:39.114 Found net devices under 0000:31:00.0: cvl_0_0 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.114 13:20:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:39.114 Found net devices under 0000:31:00.1: cvl_0_1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.114 
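The nvmf_tcp_init plumbing traced above amounts to the following sequence; a sketch using the interface names and addresses taken verbatim from the log (the ipts wrapper only adds the SPDK_NVMF comment so that teardown can strip the rule later):

# Move the target-side port into its own namespace so initiator -> target
# traffic actually crosses the link between the two E810 ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port from the initiator interface, tagged
# with a comment so cleanup can find SPDK-owned rules.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two ping checks that follow verify both directions of this topology before the target is started.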
13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:23:39.114 00:23:39.114 --- 10.0.0.2 ping statistics --- 00:23:39.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.114 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:23:39.114 00:23:39.114 --- 10.0.0.1 ping statistics --- 00:23:39.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.114 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1811663 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1811663 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1811663 ']' 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.114 13:20:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.114 [2024-11-06 13:20:20.515299] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
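The nvmfappstart -m 0xF step above boils down to launching nvmf_tgt inside the target namespace and then blocking until its RPC socket answers; a rough sketch, where the readiness loop is an assumed stand-in for the harness's waitforlisten helper:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until /var/tmp/spdk.sock accepts RPCs (rpc_get_methods is a
# built-in SPDK RPC that needs no arguments).
while ! ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done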
00:23:39.115 [2024-11-06 13:20:20.515366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.115 [2024-11-06 13:20:20.618558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.115 [2024-11-06 13:20:20.672539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.115 [2024-11-06 13:20:20.672594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.115 [2024-11-06 13:20:20.672603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.115 [2024-11-06 13:20:20.672610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.115 [2024-11-06 13:20:20.672617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.115 [2024-11-06 13:20:20.674883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.115 [2024-11-06 13:20:20.675022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.115 [2024-11-06 13:20:20.675180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.115 [2024-11-06 13:20:20.675181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 [2024-11-06 13:20:21.391774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 Malloc0 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 [2024-11-06 13:20:21.466951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:39.687 [ 00:23:39.687 { 00:23:39.687 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.687 "subtype": "Discovery", 00:23:39.687 "listen_addresses": [], 00:23:39.687 "allow_any_host": true, 00:23:39.687 "hosts": [] 00:23:39.687 }, 00:23:39.687 { 00:23:39.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.687 "subtype": "NVMe", 00:23:39.687 "listen_addresses": [ 00:23:39.687 { 00:23:39.687 "trtype": "TCP", 00:23:39.687 "adrfam": "IPv4", 00:23:39.687 "traddr": "10.0.0.2", 00:23:39.687 "trsvcid": "4420" 00:23:39.687 } 00:23:39.687 ], 00:23:39.687 "allow_any_host": true, 00:23:39.687 "hosts": [], 00:23:39.687 "serial_number": "SPDK00000000000001", 00:23:39.687 "model_number": "SPDK bdev Controller", 00:23:39.687 "max_namespaces": 2, 00:23:39.687 "min_cntlid": 1, 00:23:39.687 "max_cntlid": 65519, 00:23:39.687 "namespaces": [ 00:23:39.687 { 00:23:39.687 "nsid": 1, 00:23:39.687 "bdev_name": "Malloc0", 00:23:39.687 "name": "Malloc0", 00:23:39.687 "nguid": "CEB00C37C7BA4CA1BD1B85FBC33D4A79", 00:23:39.687 "uuid": "ceb00c37-c7ba-4ca1-bd1b-85fbc33d4a79" 00:23:39.687 } 00:23:39.687 ] 00:23:39.687 } 00:23:39.687 ] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1811831 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:39.688 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.949 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 Malloc1 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 Asynchronous Event Request test 00:23:40.210 Attaching to 10.0.0.2 00:23:40.210 Attached to 10.0.0.2 00:23:40.210 Registering asynchronous event callbacks... 00:23:40.210 Starting namespace attribute notice tests for all controllers... 00:23:40.210 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:40.210 aer_cb - Changed Namespace 00:23:40.210 Cleaning up... 
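Condensing the aer.sh steps traced above into one sequence: the rpc_cmd wrapper definition below is an assumption (in the harness it forwards to scripts/rpc.py), but the RPC names and arguments are verbatim from the trace:

rpc() { ./scripts/rpc.py "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 --name Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The aer tool touches the file once its event callbacks are registered.
rm -f /tmp/aer_touch_file
./test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!

# waitforfile: the sleep-0.1 poll loop seen above, capped at 200 tries.
i=0
while [ ! -e /tmp/aer_touch_file ] && [ "$i" -lt 200 ]; do
    i=$((i + 1)); sleep 0.1
done

# Hot-adding a second namespace fires the Changed Namespace AEN that the
# tool is waiting for; the harness then waits on the aer pid.
rpc bdev_malloc_create 64 4096 --name Malloc1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"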
00:23:40.210 [ 00:23:40.210 { 00:23:40.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:40.210 "subtype": "Discovery", 00:23:40.210 "listen_addresses": [], 00:23:40.210 "allow_any_host": true, 00:23:40.210 "hosts": [] 00:23:40.210 }, 00:23:40.210 { 00:23:40.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.210 "subtype": "NVMe", 00:23:40.210 "listen_addresses": [ 00:23:40.210 { 00:23:40.210 "trtype": "TCP", 00:23:40.210 "adrfam": "IPv4", 00:23:40.210 "traddr": "10.0.0.2", 00:23:40.210 "trsvcid": "4420" 00:23:40.210 } 00:23:40.210 ], 00:23:40.210 "allow_any_host": true, 00:23:40.210 "hosts": [], 00:23:40.210 "serial_number": "SPDK00000000000001", 00:23:40.210 "model_number": "SPDK bdev Controller", 00:23:40.210 "max_namespaces": 2, 00:23:40.210 "min_cntlid": 1, 00:23:40.210 "max_cntlid": 65519, 00:23:40.210 "namespaces": [ 00:23:40.210 { 00:23:40.210 "nsid": 1, 00:23:40.210 "bdev_name": "Malloc0", 00:23:40.210 "name": "Malloc0", 00:23:40.210 "nguid": "CEB00C37C7BA4CA1BD1B85FBC33D4A79", 00:23:40.210 "uuid": "ceb00c37-c7ba-4ca1-bd1b-85fbc33d4a79" 00:23:40.210 }, 00:23:40.210 { 00:23:40.210 "nsid": 2, 00:23:40.210 "bdev_name": "Malloc1", 00:23:40.210 "name": "Malloc1", 00:23:40.210 "nguid": "14BEFFDC5BDD4BE984DE4D16F823307C", 00:23:40.210 "uuid": "14beffdc-5bdd-4be9-84de-4d16f823307c" 00:23:40.210 } 00:23:40.210 ] 00:23:40.210 } 00:23:40.210 ] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1811831 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.211 13:20:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.211 rmmod 
nvme_tcp 00:23:40.211 rmmod nvme_fabrics 00:23:40.211 rmmod nvme_keyring 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1811663 ']' 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1811663 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1811663 ']' 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1811663 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.211 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1811663 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1811663' 00:23:40.472 killing process with pid 1811663 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1811663 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1811663 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.472 13:20:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.018 00:23:43.018 real 0m11.758s 00:23:43.018 user 0m8.532s 00:23:43.018 sys 0m6.356s 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.018 ************************************ 00:23:43.018 END TEST nvmf_aer 00:23:43.018 ************************************ 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.018 ************************************ 00:23:43.018 START TEST nvmf_async_init 00:23:43.018 ************************************ 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:43.018 * Looking for test storage... 00:23:43.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.018 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.019 --rc genhtml_branch_coverage=1 00:23:43.019 --rc genhtml_function_coverage=1 00:23:43.019 --rc genhtml_legend=1 00:23:43.019 --rc geninfo_all_blocks=1 00:23:43.019 --rc geninfo_unexecuted_blocks=1 00:23:43.019 00:23:43.019 ' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.019 --rc genhtml_branch_coverage=1 00:23:43.019 --rc genhtml_function_coverage=1 00:23:43.019 --rc genhtml_legend=1 00:23:43.019 --rc geninfo_all_blocks=1 00:23:43.019 --rc geninfo_unexecuted_blocks=1 00:23:43.019 00:23:43.019 ' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.019 --rc genhtml_branch_coverage=1 00:23:43.019 --rc genhtml_function_coverage=1 00:23:43.019 --rc genhtml_legend=1 00:23:43.019 --rc geninfo_all_blocks=1 00:23:43.019 --rc geninfo_unexecuted_blocks=1 00:23:43.019 00:23:43.019 ' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.019 --rc genhtml_branch_coverage=1 00:23:43.019 --rc genhtml_function_coverage=1 00:23:43.019 --rc genhtml_legend=1 00:23:43.019 --rc geninfo_all_blocks=1 00:23:43.019 --rc geninfo_unexecuted_blocks=1 00:23:43.019 00:23:43.019 ' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.019 13:20:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:43.019 13:20:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3fee7abae5a142ec951c03d7e056f33f 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.019 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.020 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.020 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.020 13:20:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.164 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:51.164 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:51.165 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:51.165 Found net devices under 0000:31:00.0: cvl_0_0 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:51.165 Found net devices under 0000:31:00.1: cvl_0_1 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.165 13:20:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.165 13:20:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:23:51.165 00:23:51.165 --- 10.0.0.2 ping statistics --- 00:23:51.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.165 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:51.165 00:23:51.165 --- 10.0.0.1 ping statistics --- 00:23:51.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.165 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1816195 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1816195 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1816195 ']' 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.165 13:20:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.165 [2024-11-06 13:20:32.356087] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:23:51.165 [2024-11-06 13:20:32.356150] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.165 [2024-11-06 13:20:32.456458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.165 [2024-11-06 13:20:32.507758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.165 [2024-11-06 13:20:32.507806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.165 [2024-11-06 13:20:32.507815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.165 [2024-11-06 13:20:32.507822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.165 [2024-11-06 13:20:32.507828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.165 [2024-11-06 13:20:32.508661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 [2024-11-06 13:20:33.218488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 null0 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3fee7abae5a142ec951c03d7e056f33f 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.428 [2024-11-06 13:20:33.278871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.428 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 nvme0n1 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 [ 00:23:51.689 { 00:23:51.689 "name": "nvme0n1", 00:23:51.689 "aliases": [ 00:23:51.689 "3fee7aba-e5a1-42ec-951c-03d7e056f33f" 00:23:51.689 ], 00:23:51.689 "product_name": "NVMe disk", 00:23:51.689 "block_size": 512, 00:23:51.689 "num_blocks": 2097152, 00:23:51.689 "uuid": "3fee7aba-e5a1-42ec-951c-03d7e056f33f", 00:23:51.689 "numa_id": 0, 00:23:51.689 "assigned_rate_limits": { 00:23:51.689 "rw_ios_per_sec": 0, 00:23:51.689 "rw_mbytes_per_sec": 0, 00:23:51.689 "r_mbytes_per_sec": 0, 00:23:51.689 "w_mbytes_per_sec": 0 00:23:51.689 }, 00:23:51.689 "claimed": false, 00:23:51.689 "zoned": false, 00:23:51.689 "supported_io_types": { 00:23:51.689 "read": true, 00:23:51.689 "write": true, 00:23:51.689 "unmap": false, 00:23:51.689 "flush": true, 00:23:51.689 "reset": true, 00:23:51.689 "nvme_admin": true, 00:23:51.689 "nvme_io": true, 00:23:51.689 "nvme_io_md": false, 00:23:51.689 "write_zeroes": true, 00:23:51.689 "zcopy": false, 00:23:51.689 "get_zone_info": false, 00:23:51.689 "zone_management": false, 00:23:51.689 "zone_append": false, 00:23:51.689 "compare": true, 00:23:51.689 "compare_and_write": true, 00:23:51.689 "abort": true, 00:23:51.689 "seek_hole": false, 00:23:51.689 "seek_data": false, 00:23:51.689 "copy": true, 00:23:51.689 "nvme_iov_md": false 00:23:51.689 }, 00:23:51.689 
"memory_domains": [ 00:23:51.689 { 00:23:51.689 "dma_device_id": "system", 00:23:51.689 "dma_device_type": 1 00:23:51.689 } 00:23:51.689 ], 00:23:51.689 "driver_specific": { 00:23:51.689 "nvme": [ 00:23:51.689 { 00:23:51.689 "trid": { 00:23:51.689 "trtype": "TCP", 00:23:51.689 "adrfam": "IPv4", 00:23:51.689 "traddr": "10.0.0.2", 00:23:51.689 "trsvcid": "4420", 00:23:51.689 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.689 }, 00:23:51.689 "ctrlr_data": { 00:23:51.689 "cntlid": 1, 00:23:51.689 "vendor_id": "0x8086", 00:23:51.689 "model_number": "SPDK bdev Controller", 00:23:51.689 "serial_number": "00000000000000000000", 00:23:51.689 "firmware_revision": "25.01", 00:23:51.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.689 "oacs": { 00:23:51.689 "security": 0, 00:23:51.689 "format": 0, 00:23:51.689 "firmware": 0, 00:23:51.689 "ns_manage": 0 00:23:51.689 }, 00:23:51.689 "multi_ctrlr": true, 00:23:51.689 "ana_reporting": false 00:23:51.689 }, 00:23:51.689 "vs": { 00:23:51.689 "nvme_version": "1.3" 00:23:51.689 }, 00:23:51.689 "ns_data": { 00:23:51.689 "id": 1, 00:23:51.689 "can_share": true 00:23:51.689 } 00:23:51.689 } 00:23:51.689 ], 00:23:51.689 "mp_policy": "active_passive" 00:23:51.689 } 00:23:51.689 } 00:23:51.689 ] 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.689 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 [2024-11-06 13:20:33.555321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:51.689 [2024-11-06 13:20:33.555403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19304a0 (9): Bad file descriptor 00:23:51.951 [2024-11-06 13:20:33.687847] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 [ 00:23:51.951 { 00:23:51.951 "name": "nvme0n1", 00:23:51.951 "aliases": [ 00:23:51.951 "3fee7aba-e5a1-42ec-951c-03d7e056f33f" 00:23:51.951 ], 00:23:51.951 "product_name": "NVMe disk", 00:23:51.951 "block_size": 512, 00:23:51.951 "num_blocks": 2097152, 00:23:51.951 "uuid": "3fee7aba-e5a1-42ec-951c-03d7e056f33f", 00:23:51.951 "numa_id": 0, 00:23:51.951 "assigned_rate_limits": { 00:23:51.951 "rw_ios_per_sec": 0, 00:23:51.951 "rw_mbytes_per_sec": 0, 00:23:51.951 "r_mbytes_per_sec": 0, 00:23:51.951 "w_mbytes_per_sec": 0 00:23:51.951 }, 00:23:51.951 "claimed": false, 00:23:51.951 "zoned": false, 00:23:51.951 "supported_io_types": { 00:23:51.951 "read": true, 00:23:51.951 "write": true, 00:23:51.951 "unmap": false, 00:23:51.951 "flush": true, 00:23:51.951 "reset": true, 00:23:51.951 "nvme_admin": true, 00:23:51.951 "nvme_io": true, 00:23:51.951 "nvme_io_md": false, 00:23:51.951 "write_zeroes": true, 00:23:51.951 "zcopy": false, 00:23:51.951 "get_zone_info": false, 00:23:51.951 "zone_management": false, 00:23:51.951 "zone_append": false, 00:23:51.951 "compare": true, 00:23:51.951 "compare_and_write": true, 00:23:51.951 "abort": true, 00:23:51.951 "seek_hole": false, 00:23:51.951 "seek_data": false, 00:23:51.951 "copy": true, 00:23:51.951 "nvme_iov_md": false 00:23:51.951 }, 00:23:51.951 "memory_domains": [ 00:23:51.951 { 00:23:51.951 "dma_device_id": "system", 00:23:51.951 "dma_device_type": 1 00:23:51.951 } 00:23:51.951 ], 00:23:51.951 "driver_specific": { 00:23:51.951 "nvme": [ 00:23:51.951 { 00:23:51.951 "trid": { 00:23:51.951 "trtype": "TCP", 00:23:51.951 "adrfam": "IPv4", 00:23:51.951 "traddr": "10.0.0.2", 00:23:51.951 "trsvcid": "4420", 00:23:51.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:51.951 }, 00:23:51.951 "ctrlr_data": { 00:23:51.951 "cntlid": 2, 00:23:51.951 "vendor_id": "0x8086", 00:23:51.951 "model_number": "SPDK bdev Controller", 00:23:51.951 "serial_number": "00000000000000000000", 00:23:51.951 "firmware_revision": "25.01", 00:23:51.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.951 "oacs": { 00:23:51.951 "security": 0, 00:23:51.951 "format": 0, 00:23:51.951 "firmware": 0, 00:23:51.951 "ns_manage": 0 00:23:51.951 }, 00:23:51.951 "multi_ctrlr": true, 00:23:51.951 "ana_reporting": false 00:23:51.951 }, 00:23:51.951 "vs": { 00:23:51.951 "nvme_version": "1.3" 00:23:51.951 }, 00:23:51.951 "ns_data": { 00:23:51.951 "id": 1, 00:23:51.951 "can_share": true 00:23:51.951 } 00:23:51.951 } 00:23:51.951 ], 00:23:51.951 "mp_policy": "active_passive" 00:23:51.951 } 00:23:51.951 } 00:23:51.951 ] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AGOfuG582d 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AGOfuG582d 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.AGOfuG582d 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 [2024-11-06 13:20:33.780031] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.951 [2024-11-06 13:20:33.780189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.951 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.951 [2024-11-06 13:20:33.804107] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.212 nvme0n1 00:23:52.212 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.212 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:52.212 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.212 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.212 [ 00:23:52.212 { 00:23:52.212 "name": "nvme0n1", 00:23:52.212 "aliases": [ 00:23:52.212 "3fee7aba-e5a1-42ec-951c-03d7e056f33f" 00:23:52.212 ], 00:23:52.212 "product_name": "NVMe disk", 00:23:52.212 "block_size": 512, 00:23:52.212 "num_blocks": 2097152, 00:23:52.212 "uuid": "3fee7aba-e5a1-42ec-951c-03d7e056f33f", 00:23:52.212 "numa_id": 0, 00:23:52.212 "assigned_rate_limits": { 00:23:52.212 "rw_ios_per_sec": 0, 00:23:52.213 "rw_mbytes_per_sec": 0, 00:23:52.213 "r_mbytes_per_sec": 0, 00:23:52.213 "w_mbytes_per_sec": 0 00:23:52.213 }, 00:23:52.213 "claimed": false, 00:23:52.213 "zoned": false, 00:23:52.213 "supported_io_types": { 00:23:52.213 "read": true, 00:23:52.213 "write": true, 00:23:52.213 "unmap": false, 00:23:52.213 "flush": true, 00:23:52.213 "reset": true, 00:23:52.213 "nvme_admin": true, 00:23:52.213 "nvme_io": true, 00:23:52.213 "nvme_io_md": false, 00:23:52.213 "write_zeroes": true, 00:23:52.213 "zcopy": false, 00:23:52.213 "get_zone_info": false, 00:23:52.213 "zone_management": false, 00:23:52.213 "zone_append": false, 00:23:52.213 "compare": true, 00:23:52.213 "compare_and_write": true, 00:23:52.213 "abort": true, 00:23:52.213 "seek_hole": false, 00:23:52.213 "seek_data": false, 00:23:52.213 "copy": true, 00:23:52.213 "nvme_iov_md": false 00:23:52.213 }, 00:23:52.213 "memory_domains": [ 00:23:52.213 { 00:23:52.213 "dma_device_id": "system", 00:23:52.213 "dma_device_type": 1 00:23:52.213 } 00:23:52.213 ], 00:23:52.213 "driver_specific": { 00:23:52.213 "nvme": [ 00:23:52.213 { 00:23:52.213 "trid": { 00:23:52.213 "trtype": "TCP", 00:23:52.213 "adrfam": "IPv4", 00:23:52.213 "traddr": "10.0.0.2", 00:23:52.213 "trsvcid": "4421", 00:23:52.213 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:52.213 }, 00:23:52.213 "ctrlr_data": { 00:23:52.213 "cntlid": 3, 00:23:52.213 "vendor_id": "0x8086", 00:23:52.213 "model_number": "SPDK bdev Controller", 00:23:52.213 "serial_number": "00000000000000000000", 00:23:52.213 "firmware_revision": "25.01", 00:23:52.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.213 "oacs": { 00:23:52.213 "security": 0, 00:23:52.213 "format": 0, 00:23:52.213 "firmware": 0, 00:23:52.213 "ns_manage": 0 00:23:52.213 }, 00:23:52.213 "multi_ctrlr": true, 00:23:52.213 "ana_reporting": false 00:23:52.213 }, 00:23:52.213 "vs": { 00:23:52.213 "nvme_version": "1.3" 00:23:52.213 }, 00:23:52.213 "ns_data": { 00:23:52.213 "id": 1, 00:23:52.213 "can_share": true 00:23:52.213 } 00:23:52.213 } 00:23:52.213 ], 00:23:52.213 "mp_policy": "active_passive" 00:23:52.213 } 00:23:52.213 } 00:23:52.213 ] 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.AGOfuG582d 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.213 rmmod nvme_tcp 00:23:52.213 rmmod nvme_fabrics 00:23:52.213 rmmod nvme_keyring 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1816195 ']' 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1816195 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1816195 ']' 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1816195 00:23:52.213 13:20:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1816195 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1816195' 00:23:52.213 killing process with pid 1816195 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1816195 00:23:52.213 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1816195 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.473 13:20:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.019 00:23:55.019 real 0m11.861s 00:23:55.019 user 0m4.280s 00:23:55.019 sys 0m6.117s 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:55.019 ************************************ 00:23:55.019 END TEST nvmf_async_init 00:23:55.019 ************************************ 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.019 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.020 ************************************ 00:23:55.020 START TEST dma 00:23:55.020 ************************************ 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:55.020 * Looking for test storage... 00:23:55.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:55.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.020 --rc genhtml_branch_coverage=1 00:23:55.020 --rc genhtml_function_coverage=1 00:23:55.020 --rc genhtml_legend=1 00:23:55.020 --rc geninfo_all_blocks=1 00:23:55.020 --rc geninfo_unexecuted_blocks=1 00:23:55.020 00:23:55.020 ' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:55.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.020 --rc genhtml_branch_coverage=1 00:23:55.020 --rc genhtml_function_coverage=1 00:23:55.020 --rc genhtml_legend=1 00:23:55.020 --rc geninfo_all_blocks=1 00:23:55.020 --rc geninfo_unexecuted_blocks=1 00:23:55.020 00:23:55.020 ' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:55.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.020 --rc genhtml_branch_coverage=1 00:23:55.020 --rc genhtml_function_coverage=1 00:23:55.020 --rc genhtml_legend=1 00:23:55.020 --rc geninfo_all_blocks=1 00:23:55.020 --rc geninfo_unexecuted_blocks=1 00:23:55.020 00:23:55.020 ' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:55.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.020 --rc genhtml_branch_coverage=1 00:23:55.020 --rc genhtml_function_coverage=1 00:23:55.020 --rc genhtml_legend=1 00:23:55.020 --rc geninfo_all_blocks=1 00:23:55.020 --rc geninfo_unexecuted_blocks=1 00:23:55.020 00:23:55.020 ' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.020 
13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:55.020 00:23:55.020 real 0m0.240s 00:23:55.020 user 0m0.135s 00:23:55.020 sys 0m0.119s 00:23:55.020 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:55.021 ************************************ 00:23:55.021 END TEST dma 00:23:55.021 ************************************ 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.021 ************************************ 00:23:55.021 START TEST nvmf_identify 00:23:55.021 
************************************ 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:55.021 * Looking for test storage... 00:23:55.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:55.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.021 --rc genhtml_branch_coverage=1 00:23:55.021 --rc genhtml_function_coverage=1 00:23:55.021 --rc genhtml_legend=1 00:23:55.021 --rc geninfo_all_blocks=1 00:23:55.021 --rc geninfo_unexecuted_blocks=1 00:23:55.021 00:23:55.021 ' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:55.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.021 --rc genhtml_branch_coverage=1 00:23:55.021 --rc genhtml_function_coverage=1 00:23:55.021 --rc genhtml_legend=1 00:23:55.021 --rc geninfo_all_blocks=1 00:23:55.021 --rc geninfo_unexecuted_blocks=1 00:23:55.021 00:23:55.021 ' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:55.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.021 --rc genhtml_branch_coverage=1 00:23:55.021 --rc genhtml_function_coverage=1 00:23:55.021 --rc genhtml_legend=1 00:23:55.021 --rc geninfo_all_blocks=1 00:23:55.021 --rc geninfo_unexecuted_blocks=1 00:23:55.021 00:23:55.021 ' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:55.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.021 --rc genhtml_branch_coverage=1 00:23:55.021 --rc genhtml_function_coverage=1 00:23:55.021 --rc genhtml_legend=1 00:23:55.021 --rc geninfo_all_blocks=1 00:23:55.021 --rc geninfo_unexecuted_blocks=1 00:23:55.021 00:23:55.021 ' 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.021 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.283 13:20:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.425 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:03.426 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:03.426 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
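The trace that follows is nvmf/common.sh resolving each detected E810 PCI function to its kernel net device by globbing sysfs (nvmf/common.sh@411). A minimal standalone sketch of the same lookup, assuming the two bus addresses reported in this run (0000:31:00.0 / 0000:31:00.1):

    # print the netdev bound to each NIC port, the same sysfs path the harness globs
    for pci in 0000:31:00.0 0000:31:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done

On this host each glob resolves to exactly one device (cvl_0_0 and cvl_0_1), which the harness then splits between target and initiator roles.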
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:24:03.426 Found net devices under 0000:31:00.0: cvl_0_0
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:24:03.426 Found net devices under 0000:31:00.1: cvl_0_1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:03.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:03.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms
00:24:03.426
00:24:03.426 --- 10.0.0.2 ping statistics ---
00:24:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:03.426 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:03.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:03.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:24:03.426
00:24:03.426 --- 10.0.0.1 ping statistics ---
00:24:03.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:03.426 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1820960
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1820960
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1820960 ']'
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:03.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:03.426 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:03.427 [2024-11-06 13:20:44.661855] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
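The nvmf/common.sh trace above carves the two NIC ports into a two-host topology on one machine: the target port (cvl_0_0) moves into its own network namespace with 10.0.0.2, the initiator port (cvl_0_1) keeps 10.0.0.1 in the root namespace, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings prove reachability before nvmf_tgt is launched inside the namespace. A condensed, hand-runnable sketch of the same bring-up, with interface names and addresses taken from this run; nothing here goes beyond what the trace shows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

In the nvmf_tgt invocation, -i 0 pins the shared-memory id, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace notice that follows), and -m 0xF runs reactors on cores 0-3.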
00:24:03.426 [2024-11-06 13:20:44.661920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.427 [2024-11-06 13:20:44.767272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.427 [2024-11-06 13:20:44.821832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.427 [2024-11-06 13:20:44.821879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.427 [2024-11-06 13:20:44.821888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.427 [2024-11-06 13:20:44.821895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.427 [2024-11-06 13:20:44.821901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.427 [2024-11-06 13:20:44.823953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.427 [2024-11-06 13:20:44.824112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.427 [2024-11-06 13:20:44.824248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.427 [2024-11-06 13:20:44.824249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.688 [2024-11-06 13:20:45.491934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.688 Malloc0 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.688 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.952 [2024-11-06 13:20:45.611908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:03.952 [ 00:24:03.952 { 00:24:03.952 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:03.952 "subtype": "Discovery", 00:24:03.952 "listen_addresses": [ 00:24:03.952 { 00:24:03.952 "trtype": "TCP", 00:24:03.952 "adrfam": "IPv4", 00:24:03.952 "traddr": "10.0.0.2", 00:24:03.952 "trsvcid": "4420" 00:24:03.952 } 00:24:03.952 ], 00:24:03.952 "allow_any_host": true, 00:24:03.952 "hosts": [] 00:24:03.952 }, 00:24:03.952 { 00:24:03.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.952 "subtype": "NVMe", 00:24:03.952 "listen_addresses": [ 00:24:03.952 { 00:24:03.952 "trtype": "TCP", 00:24:03.952 "adrfam": "IPv4", 00:24:03.952 "traddr": "10.0.0.2", 00:24:03.952 "trsvcid": "4420" 00:24:03.952 } 00:24:03.952 ], 00:24:03.952 "allow_any_host": true, 00:24:03.952 "hosts": [], 00:24:03.952 "serial_number": "SPDK00000000000001", 00:24:03.952 "model_number": "SPDK bdev Controller", 00:24:03.952 "max_namespaces": 32, 00:24:03.952 "min_cntlid": 1, 00:24:03.952 "max_cntlid": 65519, 00:24:03.952 "namespaces": [ 00:24:03.952 { 00:24:03.952 "nsid": 1, 00:24:03.952 "bdev_name": "Malloc0", 00:24:03.952 "name": "Malloc0", 00:24:03.952 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:03.952 "eui64": "ABCDEF0123456789", 00:24:03.952 "uuid": "9acacab9-22b8-4c88-ac94-6cd75e5a2733" 00:24:03.952 } 00:24:03.952 ] 00:24:03.952 } 00:24:03.952 ] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.952 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:03.952 [2024-11-06 13:20:45.676990] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:24:03.952 [2024-11-06 13:20:45.677037] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821105 ] 00:24:03.952 [2024-11-06 13:20:45.734591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:03.952 [2024-11-06 13:20:45.734664] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:03.952 [2024-11-06 13:20:45.734670] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:03.952 [2024-11-06 13:20:45.734690] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:03.952 [2024-11-06 13:20:45.734704] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:03.952 [2024-11-06 13:20:45.735678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:03.952 [2024-11-06 13:20:45.735726] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f5f550 0 00:24:03.952 [2024-11-06 13:20:45.745765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:03.952 [2024-11-06 13:20:45.745783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:03.952 [2024-11-06 13:20:45.745789] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:03.952 [2024-11-06 13:20:45.745793] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:03.952 [2024-11-06 13:20:45.745840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.745848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.745853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.952 [2024-11-06 13:20:45.745870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:03.952 [2024-11-06 13:20:45.745894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.952 [2024-11-06 13:20:45.753763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.952 [2024-11-06 13:20:45.753774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.952 [2024-11-06 13:20:45.753784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.753790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.952 [2024-11-06 13:20:45.753803] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:03.952 [2024-11-06 13:20:45.753812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:03.952 [2024-11-06 13:20:45.753818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:03.952 [2024-11-06 13:20:45.753836] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.753840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.753844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.952 [2024-11-06 13:20:45.753853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.952 [2024-11-06 13:20:45.753870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.952 [2024-11-06 13:20:45.754113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.952 [2024-11-06 13:20:45.754119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.952 [2024-11-06 13:20:45.754123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.754127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.952 [2024-11-06 13:20:45.754133] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:03.952 [2024-11-06 13:20:45.754142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:03.952 [2024-11-06 13:20:45.754149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.754153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.754157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.952 [2024-11-06 13:20:45.754163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.952 [2024-11-06 13:20:45.754174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.952 [2024-11-06 13:20:45.754374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.952 [2024-11-06 13:20:45.754381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.952 [2024-11-06 13:20:45.754384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.952 [2024-11-06 13:20:45.754388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.952 [2024-11-06 13:20:45.754394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:03.952 [2024-11-06 13:20:45.754403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:03.952 [2024-11-06 13:20:45.754410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.754424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.953 [2024-11-06 13:20:45.754434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 
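These DEBUG records are the fabrics admin-queue bring-up in miniature: a FABRIC CONNECT capsule establishes the admin queue, then the host walks the controller properties (VS, CAP, CC, CSTS) with PROPERTY GET/SET until the controller has been disabled and re-enabled, exactly the state-machine transitions printed in the traces that follow. The same handshake can be inspected by hand with nvme-cli, which exposes fabrics property reads; a sketch, assuming a reasonably recent nvme-cli and that the controller enumerates as /dev/nvme0 (register offsets are those defined by the NVMe spec):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme get-property /dev/nvme0 --offset=0x0 --human-readable   # CAP
    nvme get-property /dev/nvme0 --offset=0x8                    # VS
    nvme get-property /dev/nvme0 --offset=0x14                   # CC  (EN is set last)
    nvme get-property /dev/nvme0 --offset=0x1c                   # CSTS (RDY flips when ready)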
00:24:03.953 [2024-11-06 13:20:45.754627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.754634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.754641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.754651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:03.953 [2024-11-06 13:20:45.754660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.754675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.953 [2024-11-06 13:20:45.754685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.953 [2024-11-06 13:20:45.754915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.754922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.754926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.754929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.754935] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:03.953 [2024-11-06 13:20:45.754940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:03.953 [2024-11-06 13:20:45.754948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:03.953 [2024-11-06 13:20:45.755058] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:03.953 [2024-11-06 13:20:45.755065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:03.953 [2024-11-06 13:20:45.755077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.755091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.953 [2024-11-06 13:20:45.755102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.953 [2024-11-06 13:20:45.755307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.755313] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.755317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.755326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:03.953 [2024-11-06 13:20:45.755335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.755349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.953 [2024-11-06 13:20:45.755360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.953 [2024-11-06 13:20:45.755547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.755559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.755563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.755572] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:03.953 [2024-11-06 13:20:45.755577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.755585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:03.953 [2024-11-06 13:20:45.755594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.755605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.755616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.953 [2024-11-06 13:20:45.755626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.953 [2024-11-06 13:20:45.755859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.953 [2024-11-06 13:20:45.755867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.953 [2024-11-06 13:20:45.755871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755876] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5f550): datao=0, datal=4096, cccid=0 00:24:03.953 [2024-11-06 13:20:45.755880] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1fc1100) on tqpair(0x1f5f550): expected_datao=0, payload_size=4096 00:24:03.953 [2024-11-06 13:20:45.755885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755894] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.755899] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.756061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.756065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756069] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.756078] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:03.953 [2024-11-06 13:20:45.756083] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:03.953 [2024-11-06 13:20:45.756088] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:03.953 [2024-11-06 13:20:45.756097] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:03.953 [2024-11-06 13:20:45.756102] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:03.953 [2024-11-06 13:20:45.756107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.756119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.756126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.756144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.953 [2024-11-06 13:20:45.756155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.953 [2024-11-06 13:20:45.756372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.953 [2024-11-06 13:20:45.756378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.953 [2024-11-06 13:20:45.756382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:03.953 [2024-11-06 13:20:45.756395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f5f550) 00:24:03.953 
[2024-11-06 13:20:45.756409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.953 [2024-11-06 13:20:45.756415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.756429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.953 [2024-11-06 13:20:45.756435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.756448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.953 [2024-11-06 13:20:45.756454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5f550) 00:24:03.953 [2024-11-06 13:20:45.756467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.953 [2024-11-06 13:20:45.756472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.756481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:03.953 [2024-11-06 13:20:45.756488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.953 [2024-11-06 13:20:45.756491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.756498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.954 [2024-11-06 13:20:45.756510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1100, cid 0, qid 0 00:24:03.954 [2024-11-06 13:20:45.756515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1280, cid 1, qid 0 00:24:03.954 [2024-11-06 13:20:45.756520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1400, cid 2, qid 0 00:24:03.954 [2024-11-06 13:20:45.756525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1580, cid 3, qid 0 00:24:03.954 [2024-11-06 13:20:45.756530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1700, cid 4, qid 0 00:24:03.954 [2024-11-06 13:20:45.756775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.954 [2024-11-06 13:20:45.756783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.954 [2024-11-06 13:20:45.756786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:03.954 [2024-11-06 13:20:45.756790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1700) on tqpair=0x1f5f550 00:24:03.954 [2024-11-06 13:20:45.756799] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:03.954 [2024-11-06 13:20:45.756805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:03.954 [2024-11-06 13:20:45.756816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.756820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.756826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.954 [2024-11-06 13:20:45.756837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1700, cid 4, qid 0 00:24:03.954 [2024-11-06 13:20:45.757049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.954 [2024-11-06 13:20:45.757055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.954 [2024-11-06 13:20:45.757059] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.757063] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5f550): datao=0, datal=4096, cccid=4 00:24:03.954 [2024-11-06 13:20:45.757067] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fc1700) on tqpair(0x1f5f550): expected_datao=0, payload_size=4096 00:24:03.954 [2024-11-06 13:20:45.757072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.757083] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.757087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.798943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.954 [2024-11-06 13:20:45.798958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.954 [2024-11-06 13:20:45.798962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.798966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1700) on tqpair=0x1f5f550 00:24:03.954 [2024-11-06 13:20:45.798984] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:03.954 [2024-11-06 13:20:45.799018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.799032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.954 [2024-11-06 13:20:45.799040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.799055] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.954 [2024-11-06 13:20:45.799072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1700, cid 4, qid 0 00:24:03.954 [2024-11-06 13:20:45.799078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1880, cid 5, qid 0 00:24:03.954 [2024-11-06 13:20:45.799331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.954 [2024-11-06 13:20:45.799337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.954 [2024-11-06 13:20:45.799341] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799354] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5f550): datao=0, datal=1024, cccid=4 00:24:03.954 [2024-11-06 13:20:45.799359] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fc1700) on tqpair(0x1f5f550): expected_datao=0, payload_size=1024 00:24:03.954 [2024-11-06 13:20:45.799363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799370] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799374] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.954 [2024-11-06 13:20:45.799386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.954 [2024-11-06 13:20:45.799389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.799393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1880) on tqpair=0x1f5f550 00:24:03.954 [2024-11-06 13:20:45.840964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.954 [2024-11-06 13:20:45.840975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.954 [2024-11-06 13:20:45.840978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.840982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1700) on tqpair=0x1f5f550 00:24:03.954 [2024-11-06 13:20:45.840995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.840999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.841007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.954 [2024-11-06 13:20:45.841023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1700, cid 4, qid 0 00:24:03.954 [2024-11-06 13:20:45.841285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.954 [2024-11-06 13:20:45.841293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.954 [2024-11-06 13:20:45.841296] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841300] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5f550): datao=0, datal=3072, cccid=4 00:24:03.954 [2024-11-06 13:20:45.841304] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fc1700) on tqpair(0x1f5f550): expected_datao=0, payload_size=3072 00:24:03.954 [2024-11-06 13:20:45.841309] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841316] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841319] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:03.954 [2024-11-06 13:20:45.841457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:03.954 [2024-11-06 13:20:45.841461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1700) on tqpair=0x1f5f550 00:24:03.954 [2024-11-06 13:20:45.841473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f5f550) 00:24:03.954 [2024-11-06 13:20:45.841484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.954 [2024-11-06 13:20:45.841498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1700, cid 4, qid 0 00:24:03.954 [2024-11-06 13:20:45.841741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:03.954 [2024-11-06 13:20:45.841753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:03.954 [2024-11-06 13:20:45.841757] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841765] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f5f550): datao=0, datal=8, cccid=4 00:24:03.954 [2024-11-06 13:20:45.841770] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fc1700) on tqpair(0x1f5f550): expected_datao=0, payload_size=8 00:24:03.954 [2024-11-06 13:20:45.841774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841781] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:03.954 [2024-11-06 13:20:45.841784] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.219 [2024-11-06 13:20:45.881982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.219 [2024-11-06 13:20:45.881996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.219 [2024-11-06 13:20:45.882000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.219 [2024-11-06 13:20:45.882004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1700) on tqpair=0x1f5f550 00:24:04.219 ===================================================== 00:24:04.219 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:04.219 ===================================================== 00:24:04.219 Controller Capabilities/Features 00:24:04.219 ================================ 00:24:04.219 Vendor ID: 0000 00:24:04.219 Subsystem Vendor ID: 0000 00:24:04.219 Serial Number: .................... 00:24:04.219 Model Number: ........................................ 
00:24:04.219 Firmware Version: 25.01 00:24:04.219 Recommended Arb Burst: 0 00:24:04.219 IEEE OUI Identifier: 00 00 00 00:24:04.219 Multi-path I/O 00:24:04.219 May have multiple subsystem ports: No 00:24:04.219 May have multiple controllers: No 00:24:04.219 Associated with SR-IOV VF: No 00:24:04.219 Max Data Transfer Size: 131072 00:24:04.219 Max Number of Namespaces: 0 00:24:04.219 Max Number of I/O Queues: 1024 00:24:04.219 NVMe Specification Version (VS): 1.3 00:24:04.219 NVMe Specification Version (Identify): 1.3 00:24:04.219 Maximum Queue Entries: 128 00:24:04.219 Contiguous Queues Required: Yes 00:24:04.219 Arbitration Mechanisms Supported 00:24:04.219 Weighted Round Robin: Not Supported 00:24:04.219 Vendor Specific: Not Supported 00:24:04.219 Reset Timeout: 15000 ms 00:24:04.219 Doorbell Stride: 4 bytes 00:24:04.219 NVM Subsystem Reset: Not Supported 00:24:04.219 Command Sets Supported 00:24:04.219 NVM Command Set: Supported 00:24:04.219 Boot Partition: Not Supported 00:24:04.219 Memory Page Size Minimum: 4096 bytes 00:24:04.219 Memory Page Size Maximum: 4096 bytes 00:24:04.219 Persistent Memory Region: Not Supported 00:24:04.219 Optional Asynchronous Events Supported 00:24:04.219 Namespace Attribute Notices: Not Supported 00:24:04.219 Firmware Activation Notices: Not Supported 00:24:04.219 ANA Change Notices: Not Supported 00:24:04.219 PLE Aggregate Log Change Notices: Not Supported 00:24:04.219 LBA Status Info Alert Notices: Not Supported 00:24:04.219 EGE Aggregate Log Change Notices: Not Supported 00:24:04.219 Normal NVM Subsystem Shutdown event: Not Supported 00:24:04.219 Zone Descriptor Change Notices: Not Supported 00:24:04.219 Discovery Log Change Notices: Supported 00:24:04.219 Controller Attributes 00:24:04.219 128-bit Host Identifier: Not Supported 00:24:04.219 Non-Operational Permissive Mode: Not Supported 00:24:04.219 NVM Sets: Not Supported 00:24:04.219 Read Recovery Levels: Not Supported 00:24:04.219 Endurance Groups: Not Supported 00:24:04.219 Predictable Latency Mode: Not Supported 00:24:04.219 Traffic Based Keep ALive: Not Supported 00:24:04.219 Namespace Granularity: Not Supported 00:24:04.219 SQ Associations: Not Supported 00:24:04.219 UUID List: Not Supported 00:24:04.219 Multi-Domain Subsystem: Not Supported 00:24:04.219 Fixed Capacity Management: Not Supported 00:24:04.219 Variable Capacity Management: Not Supported 00:24:04.219 Delete Endurance Group: Not Supported 00:24:04.219 Delete NVM Set: Not Supported 00:24:04.219 Extended LBA Formats Supported: Not Supported 00:24:04.219 Flexible Data Placement Supported: Not Supported 00:24:04.219 00:24:04.219 Controller Memory Buffer Support 00:24:04.219 ================================ 00:24:04.219 Supported: No 00:24:04.219 00:24:04.219 Persistent Memory Region Support 00:24:04.219 ================================ 00:24:04.219 Supported: No 00:24:04.219 00:24:04.219 Admin Command Set Attributes 00:24:04.219 ============================ 00:24:04.219 Security Send/Receive: Not Supported 00:24:04.219 Format NVM: Not Supported 00:24:04.219 Firmware Activate/Download: Not Supported 00:24:04.219 Namespace Management: Not Supported 00:24:04.219 Device Self-Test: Not Supported 00:24:04.219 Directives: Not Supported 00:24:04.219 NVMe-MI: Not Supported 00:24:04.219 Virtualization Management: Not Supported 00:24:04.219 Doorbell Buffer Config: Not Supported 00:24:04.219 Get LBA Status Capability: Not Supported 00:24:04.219 Command & Feature Lockdown Capability: Not Supported 00:24:04.219 Abort Command Limit: 1 00:24:04.219 Async 
Event Request Limit: 4 00:24:04.219 Number of Firmware Slots: N/A 00:24:04.219 Firmware Slot 1 Read-Only: N/A 00:24:04.219 Firmware Activation Without Reset: N/A 00:24:04.219 Multiple Update Detection Support: N/A 00:24:04.219 Firmware Update Granularity: No Information Provided 00:24:04.219 Per-Namespace SMART Log: No 00:24:04.219 Asymmetric Namespace Access Log Page: Not Supported 00:24:04.219 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:04.219 Command Effects Log Page: Not Supported 00:24:04.219 Get Log Page Extended Data: Supported 00:24:04.219 Telemetry Log Pages: Not Supported 00:24:04.219 Persistent Event Log Pages: Not Supported 00:24:04.219 Supported Log Pages Log Page: May Support 00:24:04.219 Commands Supported & Effects Log Page: Not Supported 00:24:04.219 Feature Identifiers & Effects Log Page:May Support 00:24:04.219 NVMe-MI Commands & Effects Log Page: May Support 00:24:04.219 Data Area 4 for Telemetry Log: Not Supported 00:24:04.219 Error Log Page Entries Supported: 128 00:24:04.219 Keep Alive: Not Supported 00:24:04.219 00:24:04.219 NVM Command Set Attributes 00:24:04.219 ========================== 00:24:04.219 Submission Queue Entry Size 00:24:04.219 Max: 1 00:24:04.219 Min: 1 00:24:04.219 Completion Queue Entry Size 00:24:04.219 Max: 1 00:24:04.219 Min: 1 00:24:04.219 Number of Namespaces: 0 00:24:04.219 Compare Command: Not Supported 00:24:04.219 Write Uncorrectable Command: Not Supported 00:24:04.219 Dataset Management Command: Not Supported 00:24:04.219 Write Zeroes Command: Not Supported 00:24:04.219 Set Features Save Field: Not Supported 00:24:04.219 Reservations: Not Supported 00:24:04.219 Timestamp: Not Supported 00:24:04.219 Copy: Not Supported 00:24:04.219 Volatile Write Cache: Not Present 00:24:04.219 Atomic Write Unit (Normal): 1 00:24:04.219 Atomic Write Unit (PFail): 1 00:24:04.219 Atomic Compare & Write Unit: 1 00:24:04.219 Fused Compare & Write: Supported 00:24:04.219 Scatter-Gather List 00:24:04.219 SGL Command Set: Supported 00:24:04.219 SGL Keyed: Supported 00:24:04.219 SGL Bit Bucket Descriptor: Not Supported 00:24:04.219 SGL Metadata Pointer: Not Supported 00:24:04.219 Oversized SGL: Not Supported 00:24:04.219 SGL Metadata Address: Not Supported 00:24:04.219 SGL Offset: Supported 00:24:04.219 Transport SGL Data Block: Not Supported 00:24:04.219 Replay Protected Memory Block: Not Supported 00:24:04.219 00:24:04.219 Firmware Slot Information 00:24:04.219 ========================= 00:24:04.219 Active slot: 0 00:24:04.219 00:24:04.219 00:24:04.219 Error Log 00:24:04.219 ========= 00:24:04.219 00:24:04.219 Active Namespaces 00:24:04.219 ================= 00:24:04.219 Discovery Log Page 00:24:04.219 ================== 00:24:04.219 Generation Counter: 2 00:24:04.219 Number of Records: 2 00:24:04.219 Record Format: 0 00:24:04.219 00:24:04.219 Discovery Log Entry 0 00:24:04.219 ---------------------- 00:24:04.219 Transport Type: 3 (TCP) 00:24:04.219 Address Family: 1 (IPv4) 00:24:04.219 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:04.219 Entry Flags: 00:24:04.219 Duplicate Returned Information: 1 00:24:04.219 Explicit Persistent Connection Support for Discovery: 1 00:24:04.219 Transport Requirements: 00:24:04.219 Secure Channel: Not Required 00:24:04.219 Port ID: 0 (0x0000) 00:24:04.219 Controller ID: 65535 (0xffff) 00:24:04.219 Admin Max SQ Size: 128 00:24:04.219 Transport Service Identifier: 4420 00:24:04.219 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:04.219 Transport Address: 10.0.0.2 00:24:04.219 
Discovery Log Entry 1 00:24:04.219 ---------------------- 00:24:04.219 Transport Type: 3 (TCP) 00:24:04.219 Address Family: 1 (IPv4) 00:24:04.219 Subsystem Type: 2 (NVM Subsystem) 00:24:04.219 Entry Flags: 00:24:04.219 Duplicate Returned Information: 0 00:24:04.219 Explicit Persistent Connection Support for Discovery: 0 00:24:04.219 Transport Requirements: 00:24:04.219 Secure Channel: Not Required 00:24:04.219 Port ID: 0 (0x0000) 00:24:04.219 Controller ID: 65535 (0xffff) 00:24:04.219 Admin Max SQ Size: 128 00:24:04.219 Transport Service Identifier: 4420 00:24:04.219 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:04.220 Transport Address: 10.0.0.2 [2024-11-06 13:20:45.882119] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:04.220 [2024-11-06 13:20:45.882131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1100) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.882139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.220 [2024-11-06 13:20:45.882145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1280) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.882150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.220 [2024-11-06 13:20:45.882155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1400) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.882160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.220 [2024-11-06 13:20:45.882165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1580) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.882169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.220 [2024-11-06 13:20:45.882182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.882187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.882191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5f550) 00:24:04.220 [2024-11-06 13:20:45.882199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:45.882216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1580, cid 3, qid 0 00:24:04.220 [2024-11-06 13:20:45.882436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:45.882443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:45.882447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.882450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1580) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.882458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.882462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.882466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5f550) 00:24:04.220 [2024-11-06 
13:20:45.882472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:45.882486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1580, cid 3, qid 0 00:24:04.220 [2024-11-06 13:20:45.882738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:45.886766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:45.886773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.886781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1580) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.886787] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:04.220 [2024-11-06 13:20:45.886792] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:04.220 [2024-11-06 13:20:45.886803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.886807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.886811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f5f550) 00:24:04.220 [2024-11-06 13:20:45.886817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:45.886830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fc1580, cid 3, qid 0 00:24:04.220 [2024-11-06 13:20:45.887019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:45.887025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:45.887029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.887033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fc1580) on tqpair=0x1f5f550 00:24:04.220 [2024-11-06 13:20:45.887041] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 0 milliseconds 00:24:04.220 00:24:04.220 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:04.220 [2024-11-06 13:20:45.933628] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
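The spdk_nvme_identify invocation above is what produces the connect trace that follows: the tool parses the -r transport ID string, brings up the admin queue pair over TCP, and reads the identify data that the report below is printed from. A minimal sketch of the equivalent host code against the public SPDK API, using the same transport ID as the -r argument in the log (the program name "identify_sketch" and the abbreviated error handling are illustrative, not from the tool):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target as the -r argument in the log above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the whole admin bring-up the DEBUG
	 * trace records: ICReq/ICResp, FABRIC CONNECT, VS/CAP property
	 * reads, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, AER and
	 * keep-alive configuration. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}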
00:24:04.220 [2024-11-06 13:20:45.933679] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821212 ] 00:24:04.220 [2024-11-06 13:20:45.990288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:04.220 [2024-11-06 13:20:45.990352] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:04.220 [2024-11-06 13:20:45.990358] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:04.220 [2024-11-06 13:20:45.990375] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:04.220 [2024-11-06 13:20:45.990388] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:04.220 [2024-11-06 13:20:45.990883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:04.220 [2024-11-06 13:20:45.990920] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f98550 0 00:24:04.220 [2024-11-06 13:20:45.996765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:04.220 [2024-11-06 13:20:45.996780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:04.220 [2024-11-06 13:20:45.996784] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:04.220 [2024-11-06 13:20:45.996788] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:04.220 [2024-11-06 13:20:45.996824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.996829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:45.996833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.220 [2024-11-06 13:20:45.996852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:04.220 [2024-11-06 13:20:45.996876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.220 [2024-11-06 13:20:46.002760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:46.002771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:46.002775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.220 [2024-11-06 13:20:46.002793] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:04.220 [2024-11-06 13:20:46.002801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:04.220 [2024-11-06 13:20:46.002806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:04.220 [2024-11-06 13:20:46.002820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002828] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.220 [2024-11-06 13:20:46.002837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:46.002853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.220 [2024-11-06 13:20:46.002954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:46.002961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:46.002964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.220 [2024-11-06 13:20:46.002974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:04.220 [2024-11-06 13:20:46.002981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:04.220 [2024-11-06 13:20:46.002988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.002996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.220 [2024-11-06 13:20:46.003003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:46.003013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.220 [2024-11-06 13:20:46.003098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:46.003105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 13:20:46.003108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.003112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.220 [2024-11-06 13:20:46.003117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:04.220 [2024-11-06 13:20:46.003126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:04.220 [2024-11-06 13:20:46.003133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.003137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.003140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.220 [2024-11-06 13:20:46.003147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.220 [2024-11-06 13:20:46.003162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.220 [2024-11-06 13:20:46.003238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.220 [2024-11-06 13:20:46.003244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.220 [2024-11-06 
13:20:46.003248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.220 [2024-11-06 13:20:46.003252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.220 [2024-11-06 13:20:46.003257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:04.221 [2024-11-06 13:20:46.003267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.003281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.221 [2024-11-06 13:20:46.003291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.003401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.003407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.003411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.221 [2024-11-06 13:20:46.003419] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:04.221 [2024-11-06 13:20:46.003425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:04.221 [2024-11-06 13:20:46.003432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:04.221 [2024-11-06 13:20:46.003541] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:04.221 [2024-11-06 13:20:46.003546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:04.221 [2024-11-06 13:20:46.003555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.003569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.221 [2024-11-06 13:20:46.003580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.003697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.003703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.003707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.221 
[2024-11-06 13:20:46.003715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:04.221 [2024-11-06 13:20:46.003725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.003742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.221 [2024-11-06 13:20:46.003764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.003848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.003854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.003858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.221 [2024-11-06 13:20:46.003866] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:04.221 [2024-11-06 13:20:46.003871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.003879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:04.221 [2024-11-06 13:20:46.003890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.003900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.003903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.003910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.221 [2024-11-06 13:20:46.003922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.004041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.221 [2024-11-06 13:20:46.004047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.221 [2024-11-06 13:20:46.004051] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004055] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=4096, cccid=0 00:24:04.221 [2024-11-06 13:20:46.004060] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa100) on tqpair(0x1f98550): expected_datao=0, payload_size=4096 00:24:04.221 [2024-11-06 13:20:46.004064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004085] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004090] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.004206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.004209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.221 [2024-11-06 13:20:46.004221] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:04.221 [2024-11-06 13:20:46.004226] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:04.221 [2024-11-06 13:20:46.004231] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:04.221 [2024-11-06 13:20:46.004238] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:04.221 [2024-11-06 13:20:46.004243] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:04.221 [2024-11-06 13:20:46.004247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.004258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.004267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:04.221 [2024-11-06 13:20:46.004295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.004371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.004377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.004380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.221 [2024-11-06 13:20:46.004391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.221 [2024-11-06 13:20:46.004411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 
13:20:46.004419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.221 [2024-11-06 13:20:46.004431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.221 [2024-11-06 13:20:46.004449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.221 [2024-11-06 13:20:46.004467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.004476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:04.221 [2024-11-06 13:20:46.004482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.221 [2024-11-06 13:20:46.004493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.221 [2024-11-06 13:20:46.004505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa100, cid 0, qid 0 00:24:04.221 [2024-11-06 13:20:46.004511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa280, cid 1, qid 0 00:24:04.221 [2024-11-06 13:20:46.004516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa400, cid 2, qid 0 00:24:04.221 [2024-11-06 13:20:46.004522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.221 [2024-11-06 13:20:46.004527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.221 [2024-11-06 13:20:46.004695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.221 [2024-11-06 13:20:46.004702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.221 [2024-11-06 13:20:46.004706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.221 [2024-11-06 13:20:46.004709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.222 [2024-11-06 13:20:46.004717] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:04.222 [2024-11-06 13:20:46.004722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.004731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.004738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.004750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.004755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.004758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.222 [2024-11-06 13:20:46.004765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:04.222 [2024-11-06 13:20:46.004776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.222 [2024-11-06 13:20:46.004896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.222 [2024-11-06 13:20:46.004902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.222 [2024-11-06 13:20:46.004906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.004909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.222 [2024-11-06 13:20:46.004976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.004986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.004994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.004997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.222 [2024-11-06 13:20:46.005004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.222 [2024-11-06 13:20:46.005014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.222 [2024-11-06 13:20:46.005105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.222 [2024-11-06 13:20:46.005111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.222 [2024-11-06 13:20:46.005114] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.005118] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=4096, cccid=4 00:24:04.222 [2024-11-06 13:20:46.005123] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa700) on tqpair(0x1f98550): expected_datao=0, payload_size=4096 00:24:04.222 [2024-11-06 13:20:46.005127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.005145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.005149] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 
13:20:46.049756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.222 [2024-11-06 13:20:46.049768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.222 [2024-11-06 13:20:46.049772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.049776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.222 [2024-11-06 13:20:46.049790] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:04.222 [2024-11-06 13:20:46.049809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.049819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.049827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.049831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.222 [2024-11-06 13:20:46.049839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.222 [2024-11-06 13:20:46.049852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.222 [2024-11-06 13:20:46.049953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.222 [2024-11-06 13:20:46.049960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.222 [2024-11-06 13:20:46.049964] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.049968] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=4096, cccid=4 00:24:04.222 [2024-11-06 13:20:46.049972] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa700) on tqpair(0x1f98550): expected_datao=0, payload_size=4096 00:24:04.222 [2024-11-06 13:20:46.049977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.049991] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.049995] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.090800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.222 [2024-11-06 13:20:46.090812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.222 [2024-11-06 13:20:46.090815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.090820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.222 [2024-11-06 13:20:46.090839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.090849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:04.222 [2024-11-06 13:20:46.090858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.090862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.222 [2024-11-06 13:20:46.090869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.222 [2024-11-06 13:20:46.090882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.222 [2024-11-06 13:20:46.090973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.222 [2024-11-06 13:20:46.090980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.222 [2024-11-06 13:20:46.090984] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.090987] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=4096, cccid=4 00:24:04.222 [2024-11-06 13:20:46.090992] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa700) on tqpair(0x1f98550): expected_datao=0, payload_size=4096 00:24:04.222 [2024-11-06 13:20:46.091001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.091041] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.222 [2024-11-06 13:20:46.091045] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.136755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.486 [2024-11-06 13:20:46.136766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.486 [2024-11-06 13:20:46.136770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.136774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.486 [2024-11-06 13:20:46.136785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136829] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:04.486 [2024-11-06 13:20:46.136833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:04.486 [2024-11-06 13:20:46.136839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:04.486 [2024-11-06 13:20:46.136857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.486 
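The keep-alive setup traced above follows from the controller options: the host reads the controller's keep-alive timer via GET FEATURES (the cdw10:0000000f command in the trace) and then issues KEEP ALIVE at roughly half the effective timeout, which with the default keep_alive_timeout_ms of 10000 gives the "Sending keep alive every 5000000 us" interval in the log. A minimal sketch, assuming a caller wants to pin the timeout explicitly before connecting (connect_with_kato is a hypothetical helper, not part of the test):

#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_with_kato(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* 10 s KATO; the host then sends KEEP ALIVE about every
	 * timeout / 2 = 5 s, matching the 5000000 us interval logged. */
	opts.keep_alive_timeout_ms = 10000;

	return spdk_nvme_connect(trid, &opts, sizeof(opts));
}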
[2024-11-06 13:20:46.136861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.486 [2024-11-06 13:20:46.136869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.486 [2024-11-06 13:20:46.136876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.136880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.136884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f98550) 00:24:04.486 [2024-11-06 13:20:46.136890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:04.486 [2024-11-06 13:20:46.136906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.486 [2024-11-06 13:20:46.136911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa880, cid 5, qid 0 00:24:04.486 [2024-11-06 13:20:46.137047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.486 [2024-11-06 13:20:46.137053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.486 [2024-11-06 13:20:46.137057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.137061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.486 [2024-11-06 13:20:46.137068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.486 [2024-11-06 13:20:46.137074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.486 [2024-11-06 13:20:46.137078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.137085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa880) on tqpair=0x1f98550 00:24:04.486 [2024-11-06 13:20:46.137095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.486 [2024-11-06 13:20:46.137099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa880, cid 5, qid 0 00:24:04.487 [2024-11-06 13:20:46.137191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.137198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.137201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa880) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.137214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137235] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa880, cid 5, qid 0 00:24:04.487 [2024-11-06 13:20:46.137344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.137351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.137354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa880) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.137368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa880, cid 5, qid 0 00:24:04.487 [2024-11-06 13:20:46.137497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.137504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.137507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa880) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.137528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f98550) 00:24:04.487 [2024-11-06 13:20:46.137600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.487 [2024-11-06 13:20:46.137612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa880, cid 5, qid 0 00:24:04.487 
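The four GET LOG PAGE commands above decode (cdw10 low byte = log page ID, upper 16 bits = NUMDL) to the Error Information (01h, 8 KiB for the 128 supported entries), SMART/Health Information (02h, 512 B), Firmware Slot Information (03h, 512 B), and Commands Supported and Effects (05h, 4 KiB) pages that populate the report that follows. A sketch of fetching one of them, the SMART/Health page, with the public API; read_health_log and log_page_done are illustrative helpers, not part of the identify tool:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback: flag the request as done and report errors. */
static void
log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
}

/* Fetch the SMART / Health Information log page (02h) and poll the
 * admin queue until the completion arrives. */
static int
read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_health_information_page page;
	bool done = false;
	int rc;

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
		SPDK_NVME_LOG_HEALTH_INFORMATION,
		SPDK_NVME_GLOBAL_NS_TAG,	/* nsid 0xffffffff, as in the log */
		&page, sizeof(page), 0,
		log_page_done, &done);
	if (rc != 0) {
		return rc;
	}
	while (!done) {
		if (spdk_nvme_ctrlr_process_admin_completions(ctrlr) < 0) {
			return -1;
		}
	}
	return 0;
}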
[2024-11-06 13:20:46.137617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa700, cid 4, qid 0 00:24:04.487 [2024-11-06 13:20:46.137622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffaa00, cid 6, qid 0 00:24:04.487 [2024-11-06 13:20:46.137627] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffab80, cid 7, qid 0 00:24:04.487 [2024-11-06 13:20:46.137799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.487 [2024-11-06 13:20:46.137806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.487 [2024-11-06 13:20:46.137810] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137814] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=8192, cccid=5 00:24:04.487 [2024-11-06 13:20:46.137818] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa880) on tqpair(0x1f98550): expected_datao=0, payload_size=8192 00:24:04.487 [2024-11-06 13:20:46.137823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.487 [2024-11-06 13:20:46.137929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.487 [2024-11-06 13:20:46.137933] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137936] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=512, cccid=4 00:24:04.487 [2024-11-06 13:20:46.137941] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffa700) on tqpair(0x1f98550): expected_datao=0, payload_size=512 00:24:04.487 [2024-11-06 13:20:46.137945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137952] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137955] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.487 [2024-11-06 13:20:46.137967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.487 [2024-11-06 13:20:46.137970] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137974] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=512, cccid=6 00:24:04.487 [2024-11-06 13:20:46.137978] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffaa00) on tqpair(0x1f98550): expected_datao=0, payload_size=512 00:24:04.487 [2024-11-06 13:20:46.137982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137992] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.137998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:04.487 [2024-11-06 13:20:46.138004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:04.487 [2024-11-06 13:20:46.138007] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138011] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f98550): datao=0, datal=4096, cccid=7 00:24:04.487 [2024-11-06 13:20:46.138015] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ffab80) on tqpair(0x1f98550): expected_datao=0, payload_size=4096 00:24:04.487 [2024-11-06 13:20:46.138022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.138053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.138056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa880) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.138073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.138079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.138082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa700) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.138097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.138103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.138106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffaa00) on tqpair=0x1f98550 00:24:04.487 [2024-11-06 13:20:46.138117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.487 [2024-11-06 13:20:46.138123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.487 [2024-11-06 13:20:46.138127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.487 [2024-11-06 13:20:46.138131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffab80) on tqpair=0x1f98550 00:24:04.487 ===================================================== 00:24:04.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.487 ===================================================== 00:24:04.487 Controller Capabilities/Features 00:24:04.487 ================================ 00:24:04.487 Vendor ID: 8086 00:24:04.487 Subsystem Vendor ID: 8086 00:24:04.487 Serial Number: SPDK00000000000001 00:24:04.487 Model Number: SPDK bdev Controller 00:24:04.487 Firmware Version: 25.01 00:24:04.487 Recommended Arb Burst: 6 00:24:04.487 IEEE OUI Identifier: e4 d2 5c 00:24:04.487 Multi-path I/O 00:24:04.487 May have multiple subsystem ports: Yes 00:24:04.487 May have multiple controllers: Yes 00:24:04.487 Associated with SR-IOV VF: No 00:24:04.487 Max Data Transfer Size: 131072 00:24:04.487 Max Number of Namespaces: 32 00:24:04.487 Max Number of I/O Queues: 127 00:24:04.487 NVMe Specification Version (VS): 1.3 00:24:04.487 NVMe Specification Version (Identify): 1.3 
00:24:04.487 Maximum Queue Entries: 128 00:24:04.487 Contiguous Queues Required: Yes 00:24:04.487 Arbitration Mechanisms Supported 00:24:04.487 Weighted Round Robin: Not Supported 00:24:04.487 Vendor Specific: Not Supported 00:24:04.487 Reset Timeout: 15000 ms 00:24:04.487 Doorbell Stride: 4 bytes 00:24:04.487 NVM Subsystem Reset: Not Supported 00:24:04.487 Command Sets Supported 00:24:04.487 NVM Command Set: Supported 00:24:04.488 Boot Partition: Not Supported 00:24:04.488 Memory Page Size Minimum: 4096 bytes 00:24:04.488 Memory Page Size Maximum: 4096 bytes 00:24:04.488 Persistent Memory Region: Not Supported 00:24:04.488 Optional Asynchronous Events Supported 00:24:04.488 Namespace Attribute Notices: Supported 00:24:04.488 Firmware Activation Notices: Not Supported 00:24:04.488 ANA Change Notices: Not Supported 00:24:04.488 PLE Aggregate Log Change Notices: Not Supported 00:24:04.488 LBA Status Info Alert Notices: Not Supported 00:24:04.488 EGE Aggregate Log Change Notices: Not Supported 00:24:04.488 Normal NVM Subsystem Shutdown event: Not Supported 00:24:04.488 Zone Descriptor Change Notices: Not Supported 00:24:04.488 Discovery Log Change Notices: Not Supported 00:24:04.488 Controller Attributes 00:24:04.488 128-bit Host Identifier: Supported 00:24:04.488 Non-Operational Permissive Mode: Not Supported 00:24:04.488 NVM Sets: Not Supported 00:24:04.488 Read Recovery Levels: Not Supported 00:24:04.488 Endurance Groups: Not Supported 00:24:04.488 Predictable Latency Mode: Not Supported 00:24:04.488 Traffic Based Keep ALive: Not Supported 00:24:04.488 Namespace Granularity: Not Supported 00:24:04.488 SQ Associations: Not Supported 00:24:04.488 UUID List: Not Supported 00:24:04.488 Multi-Domain Subsystem: Not Supported 00:24:04.488 Fixed Capacity Management: Not Supported 00:24:04.488 Variable Capacity Management: Not Supported 00:24:04.488 Delete Endurance Group: Not Supported 00:24:04.488 Delete NVM Set: Not Supported 00:24:04.488 Extended LBA Formats Supported: Not Supported 00:24:04.488 Flexible Data Placement Supported: Not Supported 00:24:04.488 00:24:04.488 Controller Memory Buffer Support 00:24:04.488 ================================ 00:24:04.488 Supported: No 00:24:04.488 00:24:04.488 Persistent Memory Region Support 00:24:04.488 ================================ 00:24:04.488 Supported: No 00:24:04.488 00:24:04.488 Admin Command Set Attributes 00:24:04.488 ============================ 00:24:04.488 Security Send/Receive: Not Supported 00:24:04.488 Format NVM: Not Supported 00:24:04.488 Firmware Activate/Download: Not Supported 00:24:04.488 Namespace Management: Not Supported 00:24:04.488 Device Self-Test: Not Supported 00:24:04.488 Directives: Not Supported 00:24:04.488 NVMe-MI: Not Supported 00:24:04.488 Virtualization Management: Not Supported 00:24:04.488 Doorbell Buffer Config: Not Supported 00:24:04.488 Get LBA Status Capability: Not Supported 00:24:04.488 Command & Feature Lockdown Capability: Not Supported 00:24:04.488 Abort Command Limit: 4 00:24:04.488 Async Event Request Limit: 4 00:24:04.488 Number of Firmware Slots: N/A 00:24:04.488 Firmware Slot 1 Read-Only: N/A 00:24:04.488 Firmware Activation Without Reset: N/A 00:24:04.488 Multiple Update Detection Support: N/A 00:24:04.488 Firmware Update Granularity: No Information Provided 00:24:04.488 Per-Namespace SMART Log: No 00:24:04.488 Asymmetric Namespace Access Log Page: Not Supported 00:24:04.488 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:04.488 Command Effects Log Page: Supported 00:24:04.488 Get Log Page Extended 
Data: Supported 00:24:04.488 Telemetry Log Pages: Not Supported 00:24:04.488 Persistent Event Log Pages: Not Supported 00:24:04.488 Supported Log Pages Log Page: May Support 00:24:04.488 Commands Supported & Effects Log Page: Not Supported 00:24:04.488 Feature Identifiers & Effects Log Page:May Support 00:24:04.488 NVMe-MI Commands & Effects Log Page: May Support 00:24:04.488 Data Area 4 for Telemetry Log: Not Supported 00:24:04.488 Error Log Page Entries Supported: 128 00:24:04.488 Keep Alive: Supported 00:24:04.488 Keep Alive Granularity: 10000 ms 00:24:04.488 00:24:04.488 NVM Command Set Attributes 00:24:04.488 ========================== 00:24:04.488 Submission Queue Entry Size 00:24:04.488 Max: 64 00:24:04.488 Min: 64 00:24:04.488 Completion Queue Entry Size 00:24:04.488 Max: 16 00:24:04.488 Min: 16 00:24:04.488 Number of Namespaces: 32 00:24:04.488 Compare Command: Supported 00:24:04.488 Write Uncorrectable Command: Not Supported 00:24:04.488 Dataset Management Command: Supported 00:24:04.488 Write Zeroes Command: Supported 00:24:04.488 Set Features Save Field: Not Supported 00:24:04.488 Reservations: Supported 00:24:04.488 Timestamp: Not Supported 00:24:04.488 Copy: Supported 00:24:04.488 Volatile Write Cache: Present 00:24:04.488 Atomic Write Unit (Normal): 1 00:24:04.488 Atomic Write Unit (PFail): 1 00:24:04.488 Atomic Compare & Write Unit: 1 00:24:04.488 Fused Compare & Write: Supported 00:24:04.488 Scatter-Gather List 00:24:04.488 SGL Command Set: Supported 00:24:04.488 SGL Keyed: Supported 00:24:04.488 SGL Bit Bucket Descriptor: Not Supported 00:24:04.488 SGL Metadata Pointer: Not Supported 00:24:04.488 Oversized SGL: Not Supported 00:24:04.488 SGL Metadata Address: Not Supported 00:24:04.488 SGL Offset: Supported 00:24:04.488 Transport SGL Data Block: Not Supported 00:24:04.488 Replay Protected Memory Block: Not Supported 00:24:04.488 00:24:04.488 Firmware Slot Information 00:24:04.488 ========================= 00:24:04.488 Active slot: 1 00:24:04.488 Slot 1 Firmware Revision: 25.01 00:24:04.488 00:24:04.488 00:24:04.488 Commands Supported and Effects 00:24:04.488 ============================== 00:24:04.488 Admin Commands 00:24:04.488 -------------- 00:24:04.488 Get Log Page (02h): Supported 00:24:04.488 Identify (06h): Supported 00:24:04.488 Abort (08h): Supported 00:24:04.488 Set Features (09h): Supported 00:24:04.488 Get Features (0Ah): Supported 00:24:04.488 Asynchronous Event Request (0Ch): Supported 00:24:04.488 Keep Alive (18h): Supported 00:24:04.488 I/O Commands 00:24:04.488 ------------ 00:24:04.488 Flush (00h): Supported LBA-Change 00:24:04.488 Write (01h): Supported LBA-Change 00:24:04.488 Read (02h): Supported 00:24:04.488 Compare (05h): Supported 00:24:04.488 Write Zeroes (08h): Supported LBA-Change 00:24:04.488 Dataset Management (09h): Supported LBA-Change 00:24:04.488 Copy (19h): Supported LBA-Change 00:24:04.488 00:24:04.488 Error Log 00:24:04.488 ========= 00:24:04.488 00:24:04.488 Arbitration 00:24:04.488 =========== 00:24:04.488 Arbitration Burst: 1 00:24:04.488 00:24:04.488 Power Management 00:24:04.488 ================ 00:24:04.488 Number of Power States: 1 00:24:04.488 Current Power State: Power State #0 00:24:04.488 Power State #0: 00:24:04.488 Max Power: 0.00 W 00:24:04.488 Non-Operational State: Operational 00:24:04.488 Entry Latency: Not Reported 00:24:04.488 Exit Latency: Not Reported 00:24:04.488 Relative Read Throughput: 0 00:24:04.488 Relative Read Latency: 0 00:24:04.488 Relative Write Throughput: 0 00:24:04.488 Relative Write Latency: 0 
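The controller report being printed here is the output of SPDK's identify example, which decodes the Identify Controller data (CNS 01h) and the associated log pages for the fabrics controller at 10.0.0.2:4420. A minimal sketch of reproducing it against the same listener, assuming a local SPDK build (binary path illustrative; the transport ID and NQN are taken from the output above):

    # -r takes a transport ID string describing the NVMe-oF target
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
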
00:24:04.488 Idle Power: Not Reported 00:24:04.488 Active Power: Not Reported 00:24:04.488 Non-Operational Permissive Mode: Not Supported 00:24:04.488 00:24:04.488 Health Information 00:24:04.488 ================== 00:24:04.488 Critical Warnings: 00:24:04.488 Available Spare Space: OK 00:24:04.488 Temperature: OK 00:24:04.488 Device Reliability: OK 00:24:04.488 Read Only: No 00:24:04.488 Volatile Memory Backup: OK 00:24:04.488 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:04.488 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:04.488 Available Spare: 0% 00:24:04.488 Available Spare Threshold: 0% 00:24:04.488 Life Percentage Used:[2024-11-06 13:20:46.138233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.488 [2024-11-06 13:20:46.138238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f98550) 00:24:04.488 [2024-11-06 13:20:46.138245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.488 [2024-11-06 13:20:46.138257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffab80, cid 7, qid 0 00:24:04.488 [2024-11-06 13:20:46.138374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.488 [2024-11-06 13:20:46.138380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.488 [2024-11-06 13:20:46.138384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.488 [2024-11-06 13:20:46.138388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffab80) on tqpair=0x1f98550 00:24:04.488 [2024-11-06 13:20:46.138422] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:04.488 [2024-11-06 13:20:46.138432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa100) on tqpair=0x1f98550 00:24:04.488 [2024-11-06 13:20:46.138439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.488 [2024-11-06 13:20:46.138444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa280) on tqpair=0x1f98550 00:24:04.488 [2024-11-06 13:20:46.138449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.488 [2024-11-06 13:20:46.138454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa400) on tqpair=0x1f98550 00:24:04.488 [2024-11-06 13:20:46.138459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.488 [2024-11-06 13:20:46.138464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.138468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.489 [2024-11-06 13:20:46.138481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.138496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.138508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.138574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.138580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.138584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.138595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.138609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.138623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.138698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.138704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.138708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.138716] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:04.489 [2024-11-06 13:20:46.138721] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:04.489 [2024-11-06 13:20:46.138731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.138751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.138762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.138876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.138882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.138885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.138899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.138907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.138914] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.138924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139350] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 
13:20:46.139860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.139877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.489 [2024-11-06 13:20:46.139892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.489 [2024-11-06 13:20:46.139902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.489 [2024-11-06 13:20:46.139985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.489 [2024-11-06 13:20:46.139991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.489 [2024-11-06 13:20:46.139995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.139999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.489 [2024-11-06 13:20:46.140008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.489 [2024-11-06 13:20:46.140012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.140135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.140141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.140145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.140161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.140288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.140294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.140298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 
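The repeated FABRIC PROPERTY GET capsules on cid 3 are the host polling CSTS after nvme_ctrlr_shutdown_set_cc_done set CC.SHN: with RTD3E reported as 0, the library falls back to its default 10000 ms shutdown timeout, and the final trace record below shows the target reaching CSTS.SHST complete after about 6 ms. The same shutdown handshake runs whenever a fabrics host detaches; with the kernel initiator the equivalent trigger would be a disconnect (NQN from this log):

    # detaching sets CC.SHN, then polls CSTS.SHST on the association
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
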
[2024-11-06 13:20:46.140302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.140312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.140408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.140414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.140418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.140432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.140539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.140545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.140548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.140562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.140688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.140695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.140698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.140712] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.140721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.140728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.140739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.144754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.144762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.144766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.144770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.144780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.144784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.144788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f98550) 00:24:04.490 [2024-11-06 13:20:46.144795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.490 [2024-11-06 13:20:46.144806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ffa580, cid 3, qid 0 00:24:04.490 [2024-11-06 13:20:46.144874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:04.490 [2024-11-06 13:20:46.144881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:04.490 [2024-11-06 13:20:46.144884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:04.490 [2024-11-06 13:20:46.144888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ffa580) on tqpair=0x1f98550 00:24:04.490 [2024-11-06 13:20:46.144896] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:24:04.490 0% 00:24:04.490 Data Units Read: 0 00:24:04.490 Data Units Written: 0 00:24:04.490 Host Read Commands: 0 00:24:04.490 Host Write Commands: 0 00:24:04.490 Controller Busy Time: 0 minutes 00:24:04.490 Power Cycles: 0 00:24:04.490 Power On Hours: 0 hours 00:24:04.490 Unsafe Shutdowns: 0 00:24:04.490 Unrecoverable Media Errors: 0 00:24:04.490 Lifetime Error Log Entries: 0 00:24:04.490 Warning Temperature Time: 0 minutes 00:24:04.490 Critical Temperature Time: 0 minutes 00:24:04.490 00:24:04.490 Number of Queues 00:24:04.490 ================ 00:24:04.490 Number of I/O Submission Queues: 127 00:24:04.490 Number of I/O Completion Queues: 127 00:24:04.490 00:24:04.490 Active Namespaces 00:24:04.490 ================= 00:24:04.490 Namespace ID:1 00:24:04.490 Error Recovery Timeout: Unlimited 00:24:04.490 Command Set Identifier: NVM (00h) 00:24:04.490 Deallocate: Supported 00:24:04.490 Deallocated/Unwritten Error: Not Supported 00:24:04.490 Deallocated Read Value: Unknown 00:24:04.490 Deallocate in Write Zeroes: Not Supported 00:24:04.490 Deallocated Guard Field: 0xFFFF 00:24:04.490 Flush: Supported 00:24:04.490 Reservation: Supported 00:24:04.490 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:04.490 Size (in LBAs): 131072 (0GiB) 00:24:04.490 Capacity (in LBAs): 131072 (0GiB) 00:24:04.490 Utilization (in LBAs): 131072 (0GiB) 00:24:04.490 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:04.490 EUI64: ABCDEF0123456789 00:24:04.490 UUID: 9acacab9-22b8-4c88-ac94-6cd75e5a2733 00:24:04.490 Thin Provisioning: Not Supported 00:24:04.490 Per-NS Atomic Units: Yes 00:24:04.490 Atomic Boundary Size (Normal): 0 00:24:04.490 Atomic Boundary Size (PFail): 0 00:24:04.490 Atomic Boundary Offset: 0 00:24:04.490 Maximum Single Source Range Length: 65535 00:24:04.490 Maximum Copy Length: 65535 00:24:04.490 Maximum Source Range Count: 1 00:24:04.490 NGUID/EUI64 Never Reused: No 00:24:04.490 Namespace Write Protected: No 00:24:04.490 Number of LBA Formats: 1 00:24:04.490 Current LBA Format: LBA Format #00 00:24:04.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:04.490 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.490 rmmod nvme_tcp 00:24:04.490 rmmod nvme_fabrics 00:24:04.490 rmmod nvme_keyring 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1820960 ']' 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1820960 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1820960 ']' 00:24:04.490 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1820960 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1820960 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1820960' 00:24:04.491 killing process with pid 1820960 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1820960 00:24:04.491 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1820960 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.751 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.299 13:20:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.299 00:24:07.299 real 0m11.874s 00:24:07.299 user 0m8.876s 00:24:07.299 sys 0m6.325s 00:24:07.299 13:20:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:07.300 ************************************ 00:24:07.300 END TEST nvmf_identify 00:24:07.300 ************************************ 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.300 ************************************ 00:24:07.300 START TEST nvmf_perf 00:24:07.300 ************************************ 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:07.300 * Looking for test storage... 
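With the subsystem deleted, the nvme-tcp/nvme-fabrics/nvme-keyring modules removed, and the target process killed, nvmf_identify completes in roughly 11.9 s of wall time and the runner moves straight into the next host-suite stage. Each stage is a standalone script, so the perf test that starts here can be re-run in isolation against a local tree (checkout path illustrative; the --transport flag is exactly as invoked above):

    # re-run only the perf stage of the host suite over TCP
    cd /path/to/spdk
    sudo ./test/nvmf/host/perf.sh --transport=tcp
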
00:24:07.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.300 --rc genhtml_branch_coverage=1 00:24:07.300 --rc genhtml_function_coverage=1 00:24:07.300 --rc genhtml_legend=1 00:24:07.300 --rc geninfo_all_blocks=1 00:24:07.300 --rc geninfo_unexecuted_blocks=1 00:24:07.300 00:24:07.300 ' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.300 --rc genhtml_branch_coverage=1 00:24:07.300 --rc genhtml_function_coverage=1 00:24:07.300 --rc genhtml_legend=1 00:24:07.300 --rc geninfo_all_blocks=1 00:24:07.300 --rc geninfo_unexecuted_blocks=1 00:24:07.300 00:24:07.300 ' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.300 --rc genhtml_branch_coverage=1 00:24:07.300 --rc genhtml_function_coverage=1 00:24:07.300 --rc genhtml_legend=1 00:24:07.300 --rc geninfo_all_blocks=1 00:24:07.300 --rc geninfo_unexecuted_blocks=1 00:24:07.300 00:24:07.300 ' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.300 --rc genhtml_branch_coverage=1 00:24:07.300 --rc genhtml_function_coverage=1 00:24:07.300 --rc genhtml_legend=1 00:24:07.300 --rc geninfo_all_blocks=1 00:24:07.300 --rc geninfo_unexecuted_blocks=1 00:24:07.300 00:24:07.300 ' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.300 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.301 13:20:48 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:07.301 13:20:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:15.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:15.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.478 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:15.478 Found net devices under 0000:31:00.0: cvl_0_0 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.479 13:20:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:15.479 Found net devices under 0000:31:00.1: cvl_0_1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.479 13:20:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:24:15.479 00:24:15.479 --- 10.0.0.2 ping statistics --- 00:24:15.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.479 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:24:15.479 00:24:15.479 --- 10.0.0.1 ping statistics --- 00:24:15.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.479 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1825456 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1825456 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1825456 ']' 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:15.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:15.479 13:20:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.479 [2024-11-06 13:20:56.608281] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:24:15.479 [2024-11-06 13:20:56.608348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.479 [2024-11-06 13:20:56.708854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.479 [2024-11-06 13:20:56.762854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.479 [2024-11-06 13:20:56.762906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.479 [2024-11-06 13:20:56.762915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.479 [2024-11-06 13:20:56.762922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.479 [2024-11-06 13:20:56.762928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.479 [2024-11-06 13:20:56.765143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.479 [2024-11-06 13:20:56.765304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.479 [2024-11-06 13:20:56.765464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.479 [2024-11-06 13:20:56.765465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:15.802 13:20:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:16.432 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:16.432 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:16.432 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:16.432 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:16.694 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
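The trace above resolves the local NVMe controller's PCI address out of the running target's bdev config and creates the Malloc bdev that serves as the second test namespace. A minimal standalone sketch of that sequence, assuming an SPDK checkout with rpc.py at the usual location (the shortened path is an assumption; the traced run uses the full workspace path):

# Query the bdev subsystem config over RPC and pull out the traddr of the
# controller named Nvme0, exactly as host/perf.sh does in the trace above.
rpc=scripts/rpc.py
local_nvme_trid=$("$rpc" framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr')
# Create the 64 MiB / 512 B-block Malloc bdev (MALLOC_BDEV_SIZE=64,
# MALLOC_BLOCK_SIZE=512 above); the RPC prints the new bdev name, Malloc0
# in the trace, and Nvme0n1 is appended when a local controller was found.
bdevs=$("$rpc" bdev_malloc_create 64 512)
[ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"

Both bdevs are then attached as namespaces of nqn.2016-06.io.spdk:cnode1 by the nvmf_subsystem_add_ns RPC calls that follow.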
00:24:16.694 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:16.694 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:16.694 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:16.694 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:16.954 [2024-11-06 13:20:58.611046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.954 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.954 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:16.954 13:20:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.215 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:17.215 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:17.476 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.476 [2024-11-06 13:20:59.369884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.736 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.736 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:17.736 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:17.736 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:17.736 13:20:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:19.119 Initializing NVMe Controllers 00:24:19.119 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:19.119 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:19.119 Initialization complete. Launching workers. 
00:24:19.119 ======================================================== 00:24:19.119 Latency(us) 00:24:19.119 Device Information : IOPS MiB/s Average min max 00:24:19.119 PCIE (0000:65:00.0) NSID 1 from core 0: 79242.08 309.54 404.38 14.68 5180.48 00:24:19.119 ======================================================== 00:24:19.119 Total : 79242.08 309.54 404.38 14.68 5180.48 00:24:19.119 00:24:19.119 13:21:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.501 Initializing NVMe Controllers 00:24:20.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:20.501 Initialization complete. Launching workers. 00:24:20.501 ======================================================== 00:24:20.501 Latency(us) 00:24:20.501 Device Information : IOPS MiB/s Average min max 00:24:20.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10432.96 149.00 46248.56 00:24:20.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 86.00 0.34 11692.94 7954.17 47890.17 00:24:20.502 ======================================================== 00:24:20.502 Total : 185.00 0.72 11018.68 149.00 47890.17 00:24:20.502 00:24:20.502 13:21:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:21.886 Initializing NVMe Controllers 00:24:21.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:21.886 Initialization complete. Launching workers. 00:24:21.886 ======================================================== 00:24:21.886 Latency(us) 00:24:21.886 Device Information : IOPS MiB/s Average min max 00:24:21.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11664.99 45.57 2743.97 411.75 6346.08 00:24:21.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3801.00 14.85 8465.21 6521.02 16015.76 00:24:21.886 ======================================================== 00:24:21.886 Total : 15465.98 60.41 4150.05 411.75 16015.76 00:24:21.886 00:24:21.886 13:21:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:21.886 13:21:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:21.886 13:21:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.432 Initializing NVMe Controllers 00:24:24.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.432 Controller IO queue size 128, less than required. 00:24:24.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:24.432 Controller IO queue size 128, less than required. 00:24:24.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:24.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:24.432 Initialization complete. Launching workers. 00:24:24.432 ======================================================== 00:24:24.432 Latency(us) 00:24:24.432 Device Information : IOPS MiB/s Average min max 00:24:24.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1816.49 454.12 71424.53 41676.07 108460.72 00:24:24.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.50 143.87 234708.77 71938.47 338220.12 00:24:24.432 ======================================================== 00:24:24.432 Total : 2391.99 598.00 110709.68 41676.07 338220.12 00:24:24.432 00:24:24.432 13:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:24.432 No valid NVMe controllers or AIO or URING devices found 00:24:24.432 Initializing NVMe Controllers 00:24:24.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.432 Controller IO queue size 128, less than required. 00:24:24.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.432 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:24.432 Controller IO queue size 128, less than required. 00:24:24.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:24.432 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:24.432 WARNING: Some requested NVMe devices were skipped 00:24:24.432 13:21:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:26.977 Initializing NVMe Controllers 00:24:26.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.977 Controller IO queue size 128, less than required. 00:24:26.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.977 Controller IO queue size 128, less than required. 00:24:26.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:26.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.977 Initialization complete. Launching workers. 
00:24:26.977 00:24:26.977 ==================== 00:24:26.977 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:26.977 TCP transport: 00:24:26.977 polls: 37031 00:24:26.977 idle_polls: 21311 00:24:26.977 sock_completions: 15720 00:24:26.977 nvme_completions: 9073 00:24:26.977 submitted_requests: 13636 00:24:26.977 queued_requests: 1 00:24:26.977 00:24:26.977 ==================== 00:24:26.977 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:26.977 TCP transport: 00:24:26.977 polls: 32615 00:24:26.977 idle_polls: 19773 00:24:26.977 sock_completions: 12842 00:24:26.977 nvme_completions: 6835 00:24:26.977 submitted_requests: 10386 00:24:26.977 queued_requests: 1 00:24:26.977 ======================================================== 00:24:26.977 Latency(us) 00:24:26.977 Device Information : IOPS MiB/s Average min max 00:24:26.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2265.37 566.34 57395.64 34719.10 103150.49 00:24:26.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1706.52 426.63 75381.25 32308.98 129483.19 00:24:26.977 ======================================================== 00:24:26.977 Total : 3971.89 992.97 65123.15 32308.98 129483.19 00:24:26.977 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:26.977 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.239 rmmod nvme_tcp 00:24:27.239 rmmod nvme_fabrics 00:24:27.239 rmmod nvme_keyring 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1825456 ']' 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1825456 00:24:27.239 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1825456 ']' 00:24:27.240 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1825456 00:24:27.240 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:27.240 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:27.240 13:21:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1825456 00:24:27.240 13:21:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:27.240 13:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:27.240 13:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1825456' 00:24:27.240 killing process with pid 1825456 00:24:27.240 13:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1825456 00:24:27.240 13:21:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1825456 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.149 13:21:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.691 00:24:31.691 real 0m24.379s 00:24:31.691 user 0m58.036s 00:24:31.691 sys 0m8.852s 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.691 ************************************ 00:24:31.691 END TEST nvmf_perf 00:24:31.691 ************************************ 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.691 ************************************ 00:24:31.691 START TEST nvmf_fio_host 00:24:31.691 ************************************ 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:31.691 * Looking for test storage... 
00:24:31.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.691 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.692 --rc genhtml_branch_coverage=1 00:24:31.692 --rc genhtml_function_coverage=1 00:24:31.692 --rc genhtml_legend=1 00:24:31.692 --rc geninfo_all_blocks=1 00:24:31.692 --rc geninfo_unexecuted_blocks=1 00:24:31.692 00:24:31.692 ' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.692 --rc genhtml_branch_coverage=1 00:24:31.692 --rc genhtml_function_coverage=1 00:24:31.692 --rc genhtml_legend=1 00:24:31.692 --rc geninfo_all_blocks=1 00:24:31.692 --rc geninfo_unexecuted_blocks=1 00:24:31.692 00:24:31.692 ' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.692 --rc genhtml_branch_coverage=1 00:24:31.692 --rc genhtml_function_coverage=1 00:24:31.692 --rc genhtml_legend=1 00:24:31.692 --rc geninfo_all_blocks=1 00:24:31.692 --rc geninfo_unexecuted_blocks=1 00:24:31.692 00:24:31.692 ' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.692 --rc genhtml_branch_coverage=1 00:24:31.692 --rc genhtml_function_coverage=1 00:24:31.692 --rc genhtml_legend=1 00:24:31.692 --rc geninfo_all_blocks=1 00:24:31.692 --rc geninfo_unexecuted_blocks=1 00:24:31.692 00:24:31.692 ' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.692 13:21:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.692 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.693 
13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.693 13:21:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:39.826 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:39.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:39.826 Found net devices under 0000:31:00.0: cvl_0_0 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.826 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:39.826 Found net devices under 0000:31:00.1: cvl_0_1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:24:39.827 00:24:39.827 --- 10.0.0.2 ping statistics --- 00:24:39.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.827 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:39.827 00:24:39.827 --- 10.0.0.1 ping statistics --- 00:24:39.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.827 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1833012 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1833012 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1833012 ']' 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.827 13:21:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.827 [2024-11-06 13:21:21.000303] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
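The nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables calls: move one ice port into a private namespace to act as the target, address both ends of the back-to-back link, open the NVMe/TCP port, and ping in both directions. A minimal sketch, using the interface names discovered in this run (the run tags the firewall rule with an SPDK_NVMF comment so nvmftestfini can strip it later via iptables-save | grep -v SPDK_NVMF):

    # target port lives in its own namespace; the initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the listener port through the initiator-side firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # both directions must answer before the target app is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1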
00:24:39.827 [2024-11-06 13:21:21.000366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.827 [2024-11-06 13:21:21.099713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:39.827 [2024-11-06 13:21:21.152846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.827 [2024-11-06 13:21:21.152898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.827 [2024-11-06 13:21:21.152907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.827 [2024-11-06 13:21:21.152914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.827 [2024-11-06 13:21:21.152920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.827 [2024-11-06 13:21:21.154939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.827 [2024-11-06 13:21:21.155099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.827 [2024-11-06 13:21:21.155260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.827 [2024-11-06 13:21:21.155261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.088 13:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:40.088 13:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:40.088 13:21:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:40.348 [2024-11-06 13:21:22.000559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.348 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:40.348 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.348 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.348 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:40.609 Malloc1 00:24:40.609 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.609 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.869 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.130 [2024-11-06 13:21:22.849604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.130 13:21:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:41.391 13:21:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:41.652 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:41.652 fio-3.35 00:24:41.652 Starting 1 thread 00:24:44.191 00:24:44.191 test: (groupid=0, jobs=1): 
err= 0: pid=1833832: Wed Nov 6 13:21:25 2024 00:24:44.191 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(90.9MiB/2004msec) 00:24:44.191 slat (usec): min=2, max=230, avg= 2.14, stdev= 2.16 00:24:44.191 clat (usec): min=3109, max=10492, avg=6073.91, stdev=1236.07 00:24:44.191 lat (usec): min=3145, max=10494, avg=6076.05, stdev=1236.08 00:24:44.191 clat percentiles (usec): 00:24:44.191 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:24:44.191 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5538], 60.00th=[ 5735], 00:24:44.191 | 70.00th=[ 6063], 80.00th=[ 7635], 90.00th=[ 8094], 95.00th=[ 8455], 00:24:44.191 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[ 9503], 99.95th=[ 9634], 00:24:44.191 | 99.99th=[10028] 00:24:44.191 bw ( KiB/s): min=33888, max=53432, per=99.87%, avg=46396.00, stdev=9065.68, samples=4 00:24:44.191 iops : min= 8472, max=13358, avg=11599.00, stdev=2266.42, samples=4 00:24:44.191 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2004msec); 0 zone resets 00:24:44.191 slat (usec): min=2, max=235, avg= 2.21, stdev= 1.69 00:24:44.191 clat (usec): min=2406, max=8569, avg=4908.09, stdev=1002.53 00:24:44.191 lat (usec): min=2424, max=8571, avg=4910.30, stdev=1002.57 00:24:44.191 clat percentiles (usec): 00:24:44.191 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4015], 20.00th=[ 4146], 00:24:44.191 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:24:44.191 | 70.00th=[ 4883], 80.00th=[ 6128], 90.00th=[ 6587], 95.00th=[ 6849], 00:24:44.191 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 7767], 99.95th=[ 7832], 00:24:44.191 | 99.99th=[ 8455] 00:24:44.191 bw ( KiB/s): min=34848, max=52880, per=99.98%, avg=46116.00, stdev=8394.02, samples=4 00:24:44.191 iops : min= 8712, max=13220, avg=11529.00, stdev=2098.51, samples=4 00:24:44.191 lat (msec) : 4=4.69%, 10=95.30%, 20=0.01% 00:24:44.191 cpu : usr=73.34%, sys=25.56%, ctx=33, majf=0, minf=17 00:24:44.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:44.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:44.191 issued rwts: total=23275,23108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.191 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:44.191 00:24:44.191 Run status group 0 (all jobs): 00:24:44.191 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=90.9MiB (95.3MB), run=2004-2004msec 00:24:44.191 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.7MB), run=2004-2004msec 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
sanitizers 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:44.191 13:21:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:44.451 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:44.451 fio-3.35 00:24:44.451 Starting 1 thread 00:24:46.990 00:24:46.990 test: (groupid=0, jobs=1): err= 0: pid=1834370: Wed Nov 6 13:21:28 2024 00:24:46.990 read: IOPS=9612, BW=150MiB/s (157MB/s)(301MiB/2004msec) 00:24:46.990 slat (usec): min=3, max=110, avg= 3.62, stdev= 1.60 00:24:46.990 clat (usec): min=2053, max=15151, avg=8111.13, stdev=1984.34 00:24:46.990 lat (usec): min=2057, max=15155, avg=8114.75, stdev=1984.48 00:24:46.990 clat percentiles (usec): 00:24:46.990 | 1.00th=[ 4146], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6325], 00:24:46.990 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8586], 00:24:46.990 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10683], 95.00th=[11469], 00:24:46.990 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14484], 99.95th=[14615], 00:24:46.990 | 99.99th=[15139] 00:24:46.990 bw ( KiB/s): min=72064, max=82139, per=49.23%, avg=75710.75, stdev=4601.00, samples=4 00:24:46.990 iops : min= 4504, max= 5133, avg=4732.25, stdev=287.27, samples=4 00:24:46.990 write: IOPS=5592, BW=87.4MiB/s (91.6MB/s)(155MiB/1772msec); 0 zone resets 00:24:46.990 slat (usec): 
min=39, max=456, avg=41.01, stdev= 8.70 00:24:46.990 clat (usec): min=1831, max=15770, avg=9049.24, stdev=1391.98 00:24:46.990 lat (usec): min=1871, max=15903, avg=9090.26, stdev=1394.14 00:24:46.990 clat percentiles (usec): 00:24:46.990 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7898], 00:24:46.990 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:24:46.990 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:24:46.990 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15533], 99.95th=[15664], 00:24:46.990 | 99.99th=[15795] 00:24:46.990 bw ( KiB/s): min=74880, max=85844, per=88.19%, avg=78909.00, stdev=4913.41, samples=4 00:24:46.990 iops : min= 4680, max= 5365, avg=4931.75, stdev=306.97, samples=4 00:24:46.990 lat (msec) : 2=0.01%, 4=0.62%, 10=78.41%, 20=20.95% 00:24:46.990 cpu : usr=86.07%, sys=12.58%, ctx=14, majf=0, minf=33 00:24:46.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:46.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.990 issued rwts: total=19263,9910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.990 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.990 00:24:46.990 Run status group 0 (all jobs): 00:24:46.990 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=301MiB (316MB), run=2004-2004msec 00:24:46.990 WRITE: bw=87.4MiB/s (91.6MB/s), 87.4MiB/s-87.4MiB/s (91.6MB/s-91.6MB/s), io=155MiB (162MB), run=1772-1772msec 00:24:46.990 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.990 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:46.990 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:46.990 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:46.990 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.991 rmmod nvme_tcp 00:24:46.991 rmmod nvme_fabrics 00:24:46.991 rmmod nvme_keyring 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1833012 ']' 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1833012 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1833012 ']' 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 
-- # kill -0 1833012 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1833012 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1833012' 00:24:46.991 killing process with pid 1833012 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1833012 00:24:46.991 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1833012 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.250 13:21:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.158 13:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.158 00:24:49.158 real 0m17.867s 00:24:49.158 user 0m58.184s 00:24:49.158 sys 0m7.824s 00:24:49.158 13:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:49.158 13:21:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.158 ************************************ 00:24:49.158 END TEST nvmf_fio_host 00:24:49.158 ************************************ 00:24:49.158 13:21:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:49.158 13:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:49.158 13:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.158 13:21:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.419 ************************************ 00:24:49.419 START TEST nvmf_failover 00:24:49.419 ************************************ 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:49.419 * Looking for test storage... 00:24:49.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:49.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.419 --rc genhtml_branch_coverage=1 00:24:49.419 --rc genhtml_function_coverage=1 00:24:49.419 --rc genhtml_legend=1 00:24:49.419 --rc geninfo_all_blocks=1 00:24:49.419 --rc geninfo_unexecuted_blocks=1 00:24:49.419 00:24:49.419 ' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:49.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.419 --rc genhtml_branch_coverage=1 00:24:49.419 --rc genhtml_function_coverage=1 00:24:49.419 --rc genhtml_legend=1 00:24:49.419 --rc geninfo_all_blocks=1 00:24:49.419 --rc geninfo_unexecuted_blocks=1 00:24:49.419 00:24:49.419 ' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:49.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.419 --rc genhtml_branch_coverage=1 00:24:49.419 --rc genhtml_function_coverage=1 00:24:49.419 --rc genhtml_legend=1 00:24:49.419 --rc geninfo_all_blocks=1 00:24:49.419 --rc geninfo_unexecuted_blocks=1 00:24:49.419 00:24:49.419 ' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:49.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.419 --rc genhtml_branch_coverage=1 00:24:49.419 --rc genhtml_function_coverage=1 00:24:49.419 --rc genhtml_legend=1 00:24:49.419 --rc geninfo_all_blocks=1 00:24:49.419 --rc geninfo_unexecuted_blocks=1 00:24:49.419 00:24:49.419 ' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.419 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
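The `lt 1.15 2` check traced above (used to pick lcov coverage options) splits each version string on '.', '-' and ':' and compares the fields as decimals, left to right. A simplified sketch of that scripts/common.sh logic; the real cmp_versions also routes through an op argument and normalizes non-numeric fields:

    lt() {   # returns 0 (true) when version $1 sorts before version $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use lcov 1.x options"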
00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.420 13:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.553 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:57.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:57.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:57.554 Found net devices under 0000:31:00.0: cvl_0_0 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.554 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:57.555 Found net devices under 0000:31:00.1: cvl_0_1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:24:57.555 00:24:57.555 --- 10.0.0.2 ping statistics --- 00:24:57.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.555 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:24:57.555 00:24:57.555 --- 10.0.0.1 ping statistics --- 00:24:57.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.555 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.555 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1839062 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1839062 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1839062 ']' 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.556 13:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.556 [2024-11-06 13:21:38.968919] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:24:57.556 [2024-11-06 13:21:38.968982] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.556 [2024-11-06 13:21:39.070979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:57.556 [2024-11-06 13:21:39.122943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
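nvmfappstart launches nvmf_tgt inside the namespace with -m 0xE, which is why only three reactors come up in the notices that follow: 0xE is binary 1110, selecting cores 1, 2 and 3 (the fio_host run earlier used 0xF, cores 0-3, hence its four reactors). Decoding a core mask is a one-liner:

    mask=0xE
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "core $core"   # prints core 1, 2, 3 for 0xE
    done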
00:24:57.556 [2024-11-06 13:21:39.122992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.556 [2024-11-06 13:21:39.123001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.556 [2024-11-06 13:21:39.123008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.556 [2024-11-06 13:21:39.123014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.556 [2024-11-06 13:21:39.125079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.556 [2024-11-06 13:21:39.125241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.556 [2024-11-06 13:21:39.125241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.127 13:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:58.127 [2024-11-06 13:21:40.009553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.387 13:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:58.387 Malloc0 00:24:58.387 13:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.647 13:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.908 13:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.168 [2024-11-06 13:21:40.886204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.168 13:21:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:59.428 [2024-11-06 13:21:41.082837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:59.428 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:59.428 [2024-11-06 13:21:41.283490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1839636 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1839636 /var/tmp/bdevperf.sock 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1839636 ']' 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:59.429 13:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.368 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:00.368 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:00.368 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.937 NVMe0n1 00:25:00.937 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:00.937 00:25:00.937 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1839845 00:25:00.937 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.937 13:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:02.320 13:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.320 [2024-11-06 13:21:43.976399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c106d0 is same with the state(6) to be set 00:25:02.320 [2024-11-06 13:21:43.976436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c106d0 is same with the state(6) to be set 00:25:02.320 [2024-11-06 13:21:43.976442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c106d0 is same with the state(6) to be set 00:25:02.320 
[... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* record for tqpair=0x1c106d0 repeats for each connection torn down with the 4420 listener; duplicates elided ...] 00:25:02.321 13:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:05.619 13:21:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.619 00:25:05.619 13:21:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:05.619 [2024-11-06 13:21:47.471889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c11520 is same with the state(6) to be set 00:25:05.619 [... the same *ERROR* record for tqpair=0x1c11520 repeats while the 4421 listener is removed; duplicates elided ...] 00:25:05.619 13:21:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
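Stripped of the trace noise, the target-side plumbing and the failover kicks exercised so far reduce to the sketch below. Every command, address, port, and the NQN are taken verbatim from the trace above; this is an outline of what host/failover.sh drives over RPC, not the script itself:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Target side: TCP transport, one malloc-backed namespace, three listeners
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns $NQN Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
    done
    # Host side, against bdevperf's RPC socket: two paths on one controller
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover
    # Failover kicks: drop the active listener, let I/O move to a survivor,
    # add a third path, then drop the next listener
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3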
00:25:08.913 13:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.913 [2024-11-06 13:21:50.660559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.913 13:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:09.854 13:21:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:10.114 [2024-11-06 13:21:51.855708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5c7a0 is same with the state(6) to be set 00:25:10.114 [... the same *ERROR* record for tqpair=0x1d5c7a0 repeats while the 4422 listener is removed; duplicates elided ...] 00:25:10.115 13:21:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1839845 00:25:16.707 { 00:25:16.707 "results": [ 00:25:16.707 { 00:25:16.707 "job": "NVMe0n1", 00:25:16.707 "core_mask": "0x1", 00:25:16.707 "workload": "verify", 00:25:16.707 "status": "finished", 00:25:16.707 "verify_range": { 00:25:16.707 "start": 0, 00:25:16.707 "length": 16384 00:25:16.707 }, 00:25:16.707 "queue_depth": 128, 00:25:16.707 "io_size": 4096, 00:25:16.707 "runtime": 15.008647, 00:25:16.707 "iops": 12359.94157234826, 00:25:16.707 "mibps": 48.28102176698539, 00:25:16.707 "io_failed": 8333, 00:25:16.707 "io_timeout": 0, 00:25:16.707 "avg_latency_us": 9889.241942299193, 00:25:16.707 "min_latency_us": 535.8933333333333, 00:25:16.707 "max_latency_us": 31238.826666666668 00:25:16.707 } 00:25:16.707 ], 00:25:16.707 "core_count": 1 00:25:16.707 } 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1839636 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@952 -- # '[' -z 1839636 ']' 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1839636 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:16.707 13:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1839636 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1839636' 00:25:16.707 killing process with pid 1839636 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1839636 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1839636 00:25:16.707 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.707 [2024-11-06 13:21:41.365312] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:25:16.707 [2024-11-06 13:21:41.365393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839636 ] 00:25:16.707 [2024-11-06 13:21:41.461800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.707 [2024-11-06 13:21:41.515048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.707 Running I/O for 15 seconds... 
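The JSON block emitted after 'wait 1839845' above is bdevperf's result summary. If that block is saved out of the log with the elapsed-time prefixes stripped, the headline numbers can be pulled with jq (jq assumed available on the box; results.json is a hypothetical file name used only for illustration):

    jq '.results[0] | {iops, mibps, io_failed, runtime}' results.json
    # Here: ~12360 IOPS / 48.3 MiB/s over the 15 s run, with 8333 failed I/Os
    # accumulated across the three listener removals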
00:25:16.707 11105.00 IOPS, 43.38 MiB/s [2024-11-06T12:21:58.609Z] [2024-11-06 13:21:43.980508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.707 [2024-11-06 13:21:43.980546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.707 [... identical WRITE print_command/print_completion pairs elided for lba:96024 through lba:96520 on sqid:1, every command completed ABORTED - SQ DELETION (00/08); the WRITE at lba:96528 and the admin queue's ASYNC EVENT REQUESTs (qid:0 cid:0-3) are then completed manually the same way ...] 00:25:16.709 [2024-11-06 13:21:43.981782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb14fc0 is same with the state(6) to be set 00:25:16.709 [... nvme_qpair_abort_queued_reqs then flushes the queued WRITEs lba:96536 through lba:96680, each logged as 'aborting queued i/o' / 'Command completed manually' with ABORTED - SQ DELETION (00/08); trace truncated here ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96704 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96712 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96720 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96728 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96736 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96744 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96752 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96760 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 
13:21:43.982781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96784 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96792 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96808 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96816 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96824 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982945] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.710 [2024-11-06 13:21:43.982956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 PRP1 0x0 PRP2 0x0 00:25:16.710 [2024-11-06 13:21:43.982963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.710 [2024-11-06 13:21:43.982971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.710 [2024-11-06 13:21:43.982977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.982983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96840 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.982990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.982998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.983003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.983010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.983017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.983024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.983030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.983036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.983043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.983051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.983058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.983064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96864 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.983071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.983079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.983084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.983090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96872 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.983097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.983105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:16.711 [2024-11-06 13:21:43.983110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.983116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.983123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.983131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994207] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96944 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96952 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96960 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96968 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96976 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96984 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.711 [2024-11-06 13:21:43.994421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.711 [2024-11-06 13:21:43.994427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96992 len:8 PRP1 0x0 PRP2 0x0 00:25:16.711 [2024-11-06 13:21:43.994434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.711 [2024-11-06 13:21:43.994442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97000 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97008 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97016 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 
[2024-11-06 13:21:43.994533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97024 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97032 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.994974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.994982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.994988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.994994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96136 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.995001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.995009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.995014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.995021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96144 len:8 PRP1 0x0 PRP2 0x0 
00:25:16.712 [2024-11-06 13:21:43.995029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.995037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.995043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.712 [2024-11-06 13:21:43.995049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96152 len:8 PRP1 0x0 PRP2 0x0 00:25:16.712 [2024-11-06 13:21:43.995056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.712 [2024-11-06 13:21:43.995064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.712 [2024-11-06 13:21:43.995070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96160 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96168 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96176 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96184 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96200 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96216 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96224 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96240 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96264 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96288 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:16.713 [2024-11-06 13:21:43.995523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96296 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96304 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96312 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96320 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96328 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96336 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:43.995684] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.713 [2024-11-06 13:21:43.995690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.713 [2024-11-06 13:21:43.995696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96344 len:8 PRP1 0x0 PRP2 0x0 00:25:16.713 [2024-11-06 13:21:43.995703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.713 [2024-11-06 13:21:44.004144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96352 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96360 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96384 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96392 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96400 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96408 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96416 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96424 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96432 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 
13:21:44.004490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96440 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96448 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96456 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96464 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96480 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004657] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96488 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96496 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96512 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96520 len:8 PRP1 0x0 PRP2 0x0 00:25:16.714 [2024-11-06 13:21:44.004793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.714 [2024-11-06 13:21:44.004801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.714 [2024-11-06 13:21:44.004807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.714 [2024-11-06 13:21:44.004813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:25:16.715 [2024-11-06 13:21:44.004821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:44.004869] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:16.715 [2024-11-06 13:21:44.004880] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:16.715 [2024-11-06 13:21:44.004928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14fc0 (9): Bad file descriptor 00:25:16.715 [2024-11-06 13:21:44.008557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:16.715 [2024-11-06 13:21:44.081396] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:16.715 10647.00 IOPS, 41.59 MiB/s [2024-11-06T12:21:58.617Z] 10840.33 IOPS, 42.35 MiB/s [2024-11-06T12:21:58.617Z] 10985.75 IOPS, 42.91 MiB/s [2024-11-06T12:21:58.617Z] [2024-11-06 13:21:47.472557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 
[2024-11-06 13:21:47.472688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.472991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.472996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.715 [2024-11-06 13:21:47.473003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.715 [2024-11-06 13:21:47.473008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40048 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:16.716 [2024-11-06 13:21:47.473174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.716 [2024-11-06 13:21:47.473256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473293] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.716 [2024-11-06 13:21:47.473410] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.716 [2024-11-06 13:21:47.473416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 
[2024-11-06 13:21:47.473650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.717 [2024-11-06 13:21:47.473821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.717 [2024-11-06 13:21:47.473846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40584 len:8 PRP1 0x0 PRP2 0x0 00:25:16.717 [2024-11-06 13:21:47.473851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.717 [2024-11-06 13:21:47.473863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.717 [2024-11-06 13:21:47.473867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40592 len:8 PRP1 0x0 PRP2 0x0 00:25:16.717 [2024-11-06 13:21:47.473873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.717 [2024-11-06 13:21:47.473882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.717 [2024-11-06 13:21:47.473888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40600 len:8 PRP1 0x0 PRP2 0x0 00:25:16.717 [2024-11-06 13:21:47.473893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.717 [2024-11-06 13:21:47.473898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.717 [2024-11-06 13:21:47.473903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.473907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40608 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.473911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.473920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.473924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.473929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40616 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.473933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.473939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.473943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.473947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40624 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.473952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.473957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.473961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.473965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40632 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.473970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.473975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.473979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.473983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40640 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.473988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.473993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.473998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40648 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 
13:21:47.474021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40656 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40664 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40672 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40680 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40688 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40696 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40704 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40712 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.474163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.474167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40720 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.474176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40728 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40736 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40744 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:40752 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40760 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.718 [2024-11-06 13:21:47.486244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.718 [2024-11-06 13:21:47.486248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40192 len:8 PRP1 0x0 PRP2 0x0 00:25:16.718 [2024-11-06 13:21:47.486253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486288] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:16.718 [2024-11-06 13:21:47.486310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.718 [2024-11-06 13:21:47.486316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.718 [2024-11-06 13:21:47.486323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.718 [2024-11-06 13:21:47.486329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.719 [2024-11-06 13:21:47.486334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.719 [2024-11-06 13:21:47.486344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.719 [2024-11-06 13:21:47.486349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:16.719 [2024-11-06 13:21:47.486355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.719 [2024-11-06 13:21:47.486360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:16.719 [2024-11-06 13:21:47.486393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14fc0 (9): Bad file descriptor 00:25:16.719 [2024-11-06 13:21:47.488829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:16.719 [2024-11-06 13:21:47.553208] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:16.719 11196.80 IOPS, 43.74 MiB/s [2024-11-06T12:21:58.621Z] 11497.50 IOPS, 44.91 MiB/s [2024-11-06T12:21:58.621Z] 11728.14 IOPS, 45.81 MiB/s [2024-11-06T12:21:58.621Z] 11883.00 IOPS, 46.42 MiB/s [2024-11-06T12:21:58.621Z]
00:25:16.719 [2024-11-06 13:21:51.857449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.719 [2024-11-06 13:21:51.857480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.719 [... the same command/completion pair repeats for every in-flight READ (SGL TRANSPORT DATA BLOCK, lba:123496 through lba:123760, len:8) and then for every in-flight WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba:123768 through lba:124272), each aborted with SQ DELETION (00/08) between 13:21:51.857480 and 13:21:51.858642 ...]
00:25:16.721 [2024-11-06 13:21:51.858649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:16.721 [2024-11-06 13:21:51.858654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
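Every completion above carries the same status, ABORTED - SQ DELETION, which in NVMe status terms is status code type 00 (generic) and status code 08: the command was aborted because its submission queue was deleted when the connection to the active path went down. The block that follows shows the second half of the cleanup: commands still sitting in the driver's software queue, never submitted to the wire, are completed manually with the same status. To gauge how much I/O a path switch aborts, per-opcode counts can be pulled from a saved copy of the log; a minimal sketch, again assuming a hypothetical failover.log capture:

    #!/usr/bin/env bash
    # Count aborted commands by opcode (READ vs WRITE) during failover.
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' failover.log | awk '{print $2}' | sort | uniq -c

With this run's queue depth of 128, each path switch should abort on the order of 128 in-flight commands plus whatever was queued behind them.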
00:25:16.721 [2024-11-06 13:21:51.858672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:16.721 [2024-11-06 13:21:51.858678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124288 len:8 PRP1 0x0 PRP2 0x0
00:25:16.721 [2024-11-06 13:21:51.858683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:16.721 [... the same aborting queued i/o / Command completed manually / WRITE ... PRP1 0x0 PRP2 0x0 / ABORTED - SQ DELETION (00/08) group repeats for every queued WRITE from lba:124296 through lba:124504 (timestamps 13:21:51.858691 through 13:21:51.870083) ...]
00:25:16.722 [2024-11-06 13:21:51.870124] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:16.722 [2024-11-06 13:21:51.870152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:16.722 [... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeats for admin commands cid:0 through cid:3 (completions at 13:21:51.870161 through 13:21:51.870212) ...]
00:25:16.723 [2024-11-06 13:21:51.870219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:16.723 [2024-11-06 13:21:51.870261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb14fc0 (9): Bad file descriptor
00:25:16.723 [2024-11-06 13:21:51.873520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:16.723 [2024-11-06 13:21:51.907684] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:25:16.723 11941.89 IOPS, 46.65 MiB/s [2024-11-06T12:21:58.625Z] 12045.70 IOPS, 47.05 MiB/s [2024-11-06T12:21:58.625Z] 12142.00 IOPS, 47.43 MiB/s [2024-11-06T12:21:58.625Z] 12210.33 IOPS, 47.70 MiB/s [2024-11-06T12:21:58.625Z] 12259.23 IOPS, 47.89 MiB/s [2024-11-06T12:21:58.625Z] 12311.64 IOPS, 48.09 MiB/s [2024-11-06T12:21:58.625Z] 12358.53 IOPS, 48.28 MiB/s
00:25:16.723 Latency(us)
00:25:16.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:16.723 Verification LBA range: start 0x0 length 0x4000
00:25:16.723 NVMe0n1 : 15.01 12359.94 48.28 555.21 0.00 9889.24 535.89 31238.83
00:25:16.723 ===================================================================================================================
00:25:16.723 Total : 12359.94 48.28 555.21 0.00 9889.24 535.89 31238.83
00:25:16.723 Received shutdown signal, test time was about 15.000000 seconds
00:25:16.723
00:25:16.723 Latency(us)
00:25:16.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.723 ===================================================================================================================
00:25:16.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1842797
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1842797 /var/tmp/bdevperf.sock
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1842797 ']'
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:16.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
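The @65/@67 steps above are the pass/fail check for the phase that just ended: after three forced path switches, host/failover.sh counts "Resetting controller successful" lines in the captured bdevperf output and requires exactly three; the remainder of the trace then launches a fresh bdevperf (-t 1 -f) for the next phase. A minimal sketch of that assertion logic, assuming the driver output was captured to try.txt as in this run:

    #!/usr/bin/env bash
    set -e
    # Each successful failover logs one "Resetting controller successful" line;
    # three forced path switches must therefore produce exactly three of them.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

This mirrors the grep -c and (( count != 3 )) steps visible in the trace.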
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:16.723 13:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:17.374 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:17.374 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:17.374 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:17.374 [2024-11-06 13:21:59.158670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:17.374 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:17.636 [2024-11-06 13:21:59.343124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:17.636 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:17.895 NVMe0n1
00:25:17.896 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:18.156
00:25:18.156 13:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:18.417
00:25:18.417 13:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:18.417 13:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:18.677 13:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:18.677 13:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:21.974 13:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:21.974 13:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:21.974 13:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1843900
00:25:21.974 13:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:21.974 13:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1843900
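The RPC sequence above is the entire multipath setup for this phase of the test: two extra listeners on the target, the same bdev_nvme controller attached three times with -x failover so 4421 and 4422 register as alternate paths for NVMe0, and then the active 4420 path detached to force a failover before the I/O run. Condensed into a standalone script it looks roughly like this; a sketch assuming a local SPDK checkout at $SPDK (paths, ports, and the NQN are taken from the trace):

    #!/usr/bin/env bash
    set -e
    RPC="$SPDK/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1
    SOCK=/var/tmp/bdevperf.sock

    # Expose two alternate target ports for the same subsystem.
    "$RPC" nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    "$RPC" nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

    # Attach the same controller name on all three ports; with -x failover
    # the second and third calls add alternate paths instead of new bdevs.
    for port in 4420 4421 4422; do
        "$RPC" -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover
    done

    # Drop the active path; bdev_nvme should fail over to 4421.
    "$RPC" -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN

All RPC names and flags appear verbatim in the trace; only the loop and variables are a restructuring for readability.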
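Once wait returns, bdevperf.py prints the per-job results as the JSON document that follows. The figures the test cares about (iops, io_failed) are easiest to pull out with jq; a minimal sketch, assuming the JSON had been saved to a hypothetical results.json:

    # IOPS and failed-I/O count for the NVMe0n1 job.
    jq '.results[] | select(.job == "NVMe0n1") | {iops, io_failed}' results.json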
00:25:23.355 "workload": "verify", 00:25:23.355 "status": "finished", 00:25:23.355 "verify_range": { 00:25:23.355 "start": 0, 00:25:23.355 "length": 16384 00:25:23.355 }, 00:25:23.355 "queue_depth": 128, 00:25:23.355 "io_size": 4096, 00:25:23.355 "runtime": 1.004248, 00:25:23.355 "iops": 12946.005369191675, 00:25:23.355 "mibps": 50.57033347340498, 00:25:23.355 "io_failed": 0, 00:25:23.355 "io_timeout": 0, 00:25:23.355 "avg_latency_us": 9853.173583570495, 00:25:23.355 "min_latency_us": 989.8666666666667, 00:25:23.355 "max_latency_us": 8465.066666666668 00:25:23.355 } 00:25:23.355 ], 00:25:23.355 "core_count": 1 00:25:23.355 } 00:25:23.355 13:22:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:23.355 [2024-11-06 13:21:58.202044] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:25:23.355 [2024-11-06 13:21:58.202103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842797 ] 00:25:23.355 [2024-11-06 13:21:58.284584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.355 [2024-11-06 13:21:58.314166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.355 [2024-11-06 13:22:00.536206] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:23.355 [2024-11-06 13:22:00.536250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.355 [2024-11-06 13:22:00.536260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.355 [2024-11-06 13:22:00.536267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.355 [2024-11-06 13:22:00.536272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.355 [2024-11-06 13:22:00.536278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.355 [2024-11-06 13:22:00.536284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.355 [2024-11-06 13:22:00.536289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.355 [2024-11-06 13:22:00.536294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.355 [2024-11-06 13:22:00.536300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:23.355 [2024-11-06 13:22:00.536321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:23.355 [2024-11-06 13:22:00.536332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcabfc0 (9): Bad file descriptor 00:25:23.355 [2024-11-06 13:22:00.542503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:23.355 Running I/O for 1 seconds... 00:25:23.355 12873.00 IOPS, 50.29 MiB/s 00:25:23.355 Latency(us) 00:25:23.355 [2024-11-06T12:22:05.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.355 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:23.355 Verification LBA range: start 0x0 length 0x4000 00:25:23.355 NVMe0n1 : 1.00 12946.01 50.57 0.00 0.00 9853.17 989.87 8465.07 00:25:23.355 [2024-11-06T12:22:05.257Z] =================================================================================================================== 00:25:23.355 [2024-11-06T12:22:05.257Z] Total : 12946.01 50.57 0.00 0.00 9853.17 989.87 8465.07 00:25:23.355 13:22:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.355 13:22:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:23.355 13:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.355 13:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.355 13:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:23.616 13:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.876 13:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1842797 ']' 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1842797' 00:25:27.172 killing process with pid 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1842797 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:27.172 13:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.433 rmmod nvme_tcp 00:25:27.433 rmmod nvme_fabrics 00:25:27.433 rmmod nvme_keyring 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1839062 ']' 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1839062 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1839062 ']' 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1839062 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1839062 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1839062' 00:25:27.433 killing process with pid 1839062 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1839062 00:25:27.433 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1839062 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.693 13:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.615 13:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.615 00:25:29.615 real 0m40.431s 00:25:29.615 user 2m4.044s 00:25:29.615 sys 0m8.721s 00:25:29.615 13:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:29.615 13:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:29.615 ************************************ 00:25:29.615 END TEST nvmf_failover 00:25:29.615 ************************************ 00:25:29.876 13:22:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:29.876 13:22:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.877 ************************************ 00:25:29.877 START TEST nvmf_host_discovery 00:25:29.877 ************************************ 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:29.877 * Looking for test storage... 
00:25:29.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:29.877 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.139 --rc genhtml_branch_coverage=1 00:25:30.139 --rc genhtml_function_coverage=1 00:25:30.139 --rc genhtml_legend=1 00:25:30.139 --rc geninfo_all_blocks=1 00:25:30.139 --rc geninfo_unexecuted_blocks=1 00:25:30.139 00:25:30.139 ' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.139 --rc genhtml_branch_coverage=1 00:25:30.139 --rc genhtml_function_coverage=1 00:25:30.139 --rc genhtml_legend=1 00:25:30.139 --rc geninfo_all_blocks=1 00:25:30.139 --rc geninfo_unexecuted_blocks=1 00:25:30.139 00:25:30.139 ' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.139 --rc genhtml_branch_coverage=1 00:25:30.139 --rc genhtml_function_coverage=1 00:25:30.139 --rc genhtml_legend=1 00:25:30.139 --rc geninfo_all_blocks=1 00:25:30.139 --rc geninfo_unexecuted_blocks=1 00:25:30.139 00:25:30.139 ' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:30.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.139 --rc genhtml_branch_coverage=1 00:25:30.139 --rc genhtml_function_coverage=1 00:25:30.139 --rc genhtml_legend=1 00:25:30.139 --rc geninfo_all_blocks=1 00:25:30.139 --rc geninfo_unexecuted_blocks=1 00:25:30.139 00:25:30.139 ' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:30.139 13:22:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.139 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.140 13:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:38.278 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:38.278 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.278 13:22:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:38.278 Found net devices under 0000:31:00.0: cvl_0_0 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.278 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:38.279 Found net devices under 0000:31:00.1: cvl_0_1 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.279 
13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:25:38.279 00:25:38.279 --- 10.0.0.2 ping statistics --- 00:25:38.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.279 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:38.279 00:25:38.279 --- 10.0.0.1 ping statistics --- 00:25:38.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.279 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1849193 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1849193 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1849193 ']' 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.279 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.279 [2024-11-06 13:22:19.511077] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:25:38.279 [2024-11-06 13:22:19.511141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.279 [2024-11-06 13:22:19.610925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.279 [2024-11-06 13:22:19.662327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.279 [2024-11-06 13:22:19.662375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.279 [2024-11-06 13:22:19.662385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.279 [2024-11-06 13:22:19.662391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.279 [2024-11-06 13:22:19.662397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.279 [2024-11-06 13:22:19.663174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 [2024-11-06 13:22:20.377826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 [2024-11-06 13:22:20.390076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 null0 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 null1 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1849543 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1849543 /tmp/host.sock 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1849543 ']' 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:38.540 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.540 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.801 [2024-11-06 13:22:20.486080] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:25:38.801 [2024-11-06 13:22:20.486144] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849543 ] 00:25:38.801 [2024-11-06 13:22:20.579440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.801 [2024-11-06 13:22:20.633247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:39.742 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 [2024-11-06 13:22:21.645305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]]
00:25:40.003 13:22:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1
00:25:40.572 [2024-11-06 13:22:22.325459] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:40.572 [2024-11-06 13:22:22.325478] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:40.572 [2024-11-06 13:22:22.325492] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:40.572 [2024-11-06 13:22:22.455924] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:40.833 [2024-11-06 13:22:22.555824] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:25:40.833 [2024-11-06 13:22:22.556788] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17588c0:1 started.
00:25:40.833 [2024-11-06 13:22:22.558397] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:40.833 [2024-11-06 13:22:22.558413] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:40.833 [2024-11-06 13:22:22.565295] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17588c0 was disconnected and freed. delete nvme_qpair.
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:41.093 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.094 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.354 13:22:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]]
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.354 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:41.615 [2024-11-06 13:22:23.337591] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1758aa0:1 started.
00:25:41.615 [2024-11-06 13:22:23.347166] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1758aa0 was disconnected and freed. delete nvme_qpair.
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.615 [2024-11-06 13:22:23.425974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-11-06 13:22:23.427155] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-06 13:22:23.427176] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:41.615 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:41.876 [2024-11-06 13:22:23.555027] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:41.876 13:22:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1
00:25:41.876 [2024-11-06 13:22:23.654885] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
[2024-11-06 13:22:23.654921] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-11-06 13:22:23.654930] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-11-06 13:22:23.654935] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.817 [2024-11-06 13:22:24.701948] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-06 13:22:24.701970] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
[2024-11-06 13:22:24.702160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 13:22:24.702176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:22:24.702185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 13:22:24.702193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:22:24.702201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 13:22:24.702209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:22:24.702217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-06 13:22:24.702225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-06 13:22:24.702232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:25:42.817 [2024-11-06 13:22:24.712171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:42.817 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:43.080 [2024-11-06 13:22:24.722207] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.722221] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.722226] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.722232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.722249] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.722568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.722583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.722592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.722604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.722617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.722624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.722632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.722639] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.722645] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.722650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:43.080 [2024-11-06 13:22:24.732279] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.732292] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.732297] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.732301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.732316] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.732598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.732610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.732618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.732634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.732645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.732651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.732659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.732665] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.732670] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.732675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-06 13:22:24.742348] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.742359] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.742364] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.742368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.742382] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:43.080 [2024-11-06 13:22:24.742661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.742672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.742680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.742691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.742701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.742708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.742715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.742720] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.742725] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.742730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-06 13:22:24.752410] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.752420] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.752423] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.752427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.752437] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.752721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.752729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.752737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.752749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.752756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.752761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.752766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.752770] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:43.080 [2024-11-06 13:22:24.752774] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.752777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:25:43.080 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:43.080 [2024-11-06 13:22:24.762465] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.762474] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.762477] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.762481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.762490] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.762773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.762783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.762788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.762796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.762803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.762808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.762813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.762817] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.762821] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.762824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:43.081 [2024-11-06 13:22:24.772519] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.772529] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.772532] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-06 13:22:24.772535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.772546] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.772661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.772670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.772676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.772684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.772696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.772701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.772706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.772711] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.772714] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.772717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
[2024-11-06 13:22:24.782575] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-06 13:22:24.782583] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-06 13:22:24.782587] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:43.081 [2024-11-06 13:22:24.782590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-06 13:22:24.782600] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-06 13:22:24.783007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 13:22:24.783037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1728fd0 with addr=10.0.0.2, port=4420
[2024-11-06 13:22:24.783046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1728fd0 is same with the state(6) to be set
[2024-11-06 13:22:24.783060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1728fd0 (9): Bad file descriptor
[2024-11-06 13:22:24.783084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-06 13:22:24.783090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-06 13:22:24.783096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-06 13:22:24.783101] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-06 13:22:24.783105] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
[2024-11-06 13:22:24.783108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:43.081 [2024-11-06 13:22:24.789093] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:43.081 [2024-11-06 13:22:24.789107] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:43.081 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.082 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.343 13:22:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.343 13:22:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.286 [2024-11-06 13:22:26.086200] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:44.286 [2024-11-06 13:22:26.086214] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:44.286 [2024-11-06 13:22:26.086222] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.286 [2024-11-06 13:22:26.174473] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:44.548 [2024-11-06 13:22:26.279502] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:44.548 [2024-11-06 13:22:26.280151] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x17401d0:1 started. 00:25:44.548 [2024-11-06 13:22:26.281475] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.548 [2024-11-06 13:22:26.281497] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.548 [2024-11-06 13:22:26.284538] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x17401d0 was disconnected and freed. delete nvme_qpair. 
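
The trace below repeats bdev_nvme_start_discovery while the bdev name "nvme" is already registered, verifying that SPDK rejects a duplicate discovery service with JSON-RPC error -17 ("File exists"). A minimal standalone reproduction of the same check, using only the flags visible in this log and assuming a host application listening on /tmp/host.sock (socket and script paths are illustrative):

  # first start succeeds and waits for the attach to complete (-w)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w
  # a second call with the same -b name must fail with -17 "File exists";
  # rpc.py exits nonzero on a JSON-RPC error, so success here is a test bug
  if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery start unexpectedly succeeded" >&2
      exit 1
  fi
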
00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.548 request: 00:25:44.548 { 00:25:44.548 "name": "nvme", 00:25:44.548 "trtype": "tcp", 00:25:44.548 "traddr": "10.0.0.2", 00:25:44.548 "adrfam": "ipv4", 00:25:44.548 "trsvcid": "8009", 00:25:44.548 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.548 "wait_for_attach": true, 00:25:44.548 "method": "bdev_nvme_start_discovery", 00:25:44.548 "req_id": 1 00:25:44.548 } 00:25:44.548 Got JSON-RPC error response 00:25:44.548 response: 00:25:44.548 { 00:25:44.548 "code": -17, 00:25:44.548 "message": "File exists" 00:25:44.548 } 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.548 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.549 request: 00:25:44.549 { 00:25:44.549 "name": "nvme_second", 00:25:44.549 "trtype": "tcp", 00:25:44.549 "traddr": "10.0.0.2", 00:25:44.549 "adrfam": "ipv4", 00:25:44.549 "trsvcid": "8009", 00:25:44.549 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.549 "wait_for_attach": true, 00:25:44.549 "method": "bdev_nvme_start_discovery", 00:25:44.549 "req_id": 1 00:25:44.549 } 00:25:44.549 Got JSON-RPC error response 00:25:44.549 response: 00:25:44.549 { 00:25:44.549 "code": -17, 00:25:44.549 "message": "File exists" 00:25:44.549 } 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
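
The xtrace lines above and below unroll the waitforcondition helper from common/autotest_common.sh: it takes a shell condition as a string, re-evaluates it up to max=10 times, and returns 0 as soon as it holds. A simplified sketch of that pattern, assuming a one-second pause between retries (the real helper's retry delay is not visible in this excerpt):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # eval lets callers pass conditions that expand at check time, e.g.
          # '[[ "$(get_bdev_list)" == "" ]]'
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      echo "condition never met: $cond" >&2
      return 1
  }
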
00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.549 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.810 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:44.810 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:44.810 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.811 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.755 [2024-11-06 13:22:27.541165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.755 [2024-11-06 13:22:27.541188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1742900 with addr=10.0.0.2, port=8010 00:25:45.755 [2024-11-06 13:22:27.541198] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.755 [2024-11-06 13:22:27.541204] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.755 [2024-11-06 13:22:27.541209] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:46.696 [2024-11-06 13:22:28.543490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.696 [2024-11-06 13:22:28.543509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1742900 with addr=10.0.0.2, port=8010 00:25:46.696 [2024-11-06 13:22:28.543517] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.696 [2024-11-06 13:22:28.543522] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.696 [2024-11-06 13:22:28.543527] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.076 [2024-11-06 13:22:29.545534] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:48.076 request: 00:25:48.076 { 00:25:48.076 "name": "nvme_second", 00:25:48.076 "trtype": "tcp", 00:25:48.076 "traddr": "10.0.0.2", 00:25:48.076 "adrfam": "ipv4", 00:25:48.076 "trsvcid": "8010", 00:25:48.076 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.076 "wait_for_attach": false, 00:25:48.076 "attach_timeout_ms": 3000, 00:25:48.076 "method": "bdev_nvme_start_discovery", 00:25:48.076 "req_id": 1 00:25:48.076 } 00:25:48.076 Got JSON-RPC error response 00:25:48.076 response: 00:25:48.076 { 00:25:48.076 "code": -110, 00:25:48.076 "message": "Connection timed out" 00:25:48.076 } 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1849543 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.076 rmmod nvme_tcp 00:25:48.076 rmmod nvme_fabrics 00:25:48.076 rmmod nvme_keyring 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1849193 ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1849193 ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1849193' 00:25:48.076 killing process with pid 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1849193 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.076 13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.076 
13:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:50.621 00:25:50.621 real 0m20.339s 00:25:50.621 user 0m23.395s 00:25:50.621 sys 0m7.276s 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.621 ************************************ 00:25:50.621 END TEST nvmf_host_discovery 00:25:50.621 ************************************ 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:50.621 13:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.621 ************************************ 00:25:50.621 START TEST nvmf_host_multipath_status 00:25:50.621 ************************************ 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.621 * Looking for test storage... 00:25:50.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:50.621 13:22:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:50.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.621 --rc genhtml_branch_coverage=1 00:25:50.621 --rc genhtml_function_coverage=1 00:25:50.621 --rc genhtml_legend=1 00:25:50.621 --rc geninfo_all_blocks=1 00:25:50.621 --rc geninfo_unexecuted_blocks=1 00:25:50.621 00:25:50.621 ' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:50.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.621 --rc genhtml_branch_coverage=1 00:25:50.621 --rc genhtml_function_coverage=1 00:25:50.621 --rc genhtml_legend=1 00:25:50.621 --rc geninfo_all_blocks=1 00:25:50.621 --rc geninfo_unexecuted_blocks=1 00:25:50.621 00:25:50.621 ' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:50.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.621 --rc genhtml_branch_coverage=1 00:25:50.621 --rc genhtml_function_coverage=1 00:25:50.621 --rc genhtml_legend=1 00:25:50.621 --rc geninfo_all_blocks=1 00:25:50.621 --rc geninfo_unexecuted_blocks=1 00:25:50.621 00:25:50.621 ' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:50.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.621 --rc genhtml_branch_coverage=1 00:25:50.621 --rc genhtml_function_coverage=1 00:25:50.621 --rc 
genhtml_legend=1 00:25:50.621 --rc geninfo_all_blocks=1 00:25:50.621 --rc geninfo_unexecuted_blocks=1 00:25:50.621 00:25:50.621 ' 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.621 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:25:50.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.622 13:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.775 13:22:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.775 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.776 
13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:58.776 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:58.776 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:58.776 Found net devices under 0000:31:00.0: cvl_0_0 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:58.776 Found net devices under 0000:31:00.1: cvl_0_1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.776 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:25:58.776 00:25:58.776 --- 10.0.0.2 ping statistics --- 00:25:58.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.777 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:25:58.777 00:25:58.777 --- 10.0.0.1 ping statistics --- 00:25:58.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.777 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1855713 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1855713 
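
The nvmf_tcp_init block above moves the target-side port (cvl_0_0) into its own network namespace so initiator and target traffic actually cross the physical link (an Intel E810 port, per the ice-driver devices found earlier). Condensed, the commands it ran are:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP (comment tag omitted)
  ping -c 1 10.0.0.2                                  # host -> target netns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> host

The two one-packet pings completing with 0% loss are the gate for starting the target.
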
00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1855713 ']' 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:58.777 13:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.777 [2024-11-06 13:22:39.947123] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:25:58.777 [2024-11-06 13:22:39.947192] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.777 [2024-11-06 13:22:40.048408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:58.777 [2024-11-06 13:22:40.103758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.777 [2024-11-06 13:22:40.103816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.777 [2024-11-06 13:22:40.103825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.777 [2024-11-06 13:22:40.103833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.777 [2024-11-06 13:22:40.103839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
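
With nvmf_tgt running inside the namespace (core mask 0x3 giving two reactors, -e 0xFFFF enabling all tracepoint groups), the trace that follows provisions the target over /var/tmp/spdk.sock. Abbreviating the full script path to rpc.py, and with the flag glosses in the comments being interpretive rather than part of the log, the sequence amounts to:

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                 # any host, ANA reporting, up to 2 namespaces
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on one ANA-reporting subsystem are what give bdevperf its two paths for the multipath status checks.
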
00:25:58.777 [2024-11-06 13:22:40.105575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.777 [2024-11-06 13:22:40.105579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1855713 00:25:59.039 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:59.301 [2024-11-06 13:22:40.974107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.301 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:59.562 Malloc0 00:25:59.562 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:59.562 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:59.823 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.087 [2024-11-06 13:22:41.801112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.087 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.348 [2024-11-06 13:22:42.001650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.348 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1856112 00:26:00.348 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:00.348 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1856112 
/var/tmp/bdevperf.sock 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1856112 ']' 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:00.349 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.291 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:01.291 13:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:01.292 13:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.863 Nvme0n1 00:26:01.863 13:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:02.123 Nvme0n1 00:26:02.123 13:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:02.123 13:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:04.036 13:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:04.036 13:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:04.296 13:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.296 13:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:05.681 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:05.681 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.681 13:22:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.682 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.942 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.942 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.942 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.942 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.203 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.203 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.203 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.203 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.463 13:22:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:06.463 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.723 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:06.983 13:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:07.923 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:07.923 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:07.923 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.923 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.183 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.183 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.183 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.183 13:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.183 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.183 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.183 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.183 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.445 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.445 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.445 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.445 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.704 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.705 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.964 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.964 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:08.964 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.224 13:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:09.224 13:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:10.606 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:10.606 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.606 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.607 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.867 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.867 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.867 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.867 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.127 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.127 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.127 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.127 13:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:11.387 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:11.663 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:12.018 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.045 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.306 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.306 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.306 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.306 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.306 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.306 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.306 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.306 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.567 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.567 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.567 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:13.567 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:13.828 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:14.088 13:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:14.348 13:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:15.288 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:15.288 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.289 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.289 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.549 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.810 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.810 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.810 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.810 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.069 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.069 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:16.069 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.069 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.330 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.330 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.330 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.330 13:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.330 13:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.330 13:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:16.330 13:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.589 13:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.849 13:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:17.788 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:17.788 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:17.788 13:22:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.788 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.047 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.047 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:18.047 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.047 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.047 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.048 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.048 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.048 13:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.307 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.307 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.307 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.307 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.567 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.567 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:18.567 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.567 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.827 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.827 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.827 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.827 
13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.827 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.827 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:19.088 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:19.088 13:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:19.348 13:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.348 13:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.732 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:20.992 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.992 13:23:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:20.992 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.992 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.252 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.252 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.252 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.252 13:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.252 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.252 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.252 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.252 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.513 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.513 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:21.513 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.772 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:22.032 13:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.972 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.972 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.972 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.972 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.234 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.234 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:23.234 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.234 13:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.234 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.234 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.234 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.234 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.494 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.494 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.494 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.494 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.754 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.014 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.015 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:24.015 
13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.275 13:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:24.275 13:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.659 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.919 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.919 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.919 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.919 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.180 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.180 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.180 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.180 13:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.180 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.180 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.180 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.180 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.441 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.441 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:26.441 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.701 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.961 13:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:27.904 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:27.904 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.904 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.904 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.164 13:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.424 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.424 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.424 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.424 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.683 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.684 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.684 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1856112 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1856112 ']' 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1856112 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1856112 00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2
00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1856112'
killing process with pid 1856112
00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1856112
00:26:28.944 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1856112
00:26:28.944 {
00:26:28.944 "results": [
00:26:28.944 {
00:26:28.944 "job": "Nvme0n1",
00:26:28.944 "core_mask": "0x4",
00:26:28.944 "workload": "verify",
00:26:28.944 "status": "terminated",
00:26:28.944 "verify_range": {
00:26:28.944 "start": 0,
00:26:28.944 "length": 16384
00:26:28.944 },
00:26:28.944 "queue_depth": 128,
00:26:28.944 "io_size": 4096,
00:26:28.944 "runtime": 26.858008,
00:26:28.944 "iops": 11966.67303100066,
00:26:28.944 "mibps": 46.74481652734633,
00:26:28.944 "io_failed": 0,
00:26:28.944 "io_timeout": 0,
00:26:28.944 "avg_latency_us": 10677.816746224602,
00:26:28.944 "min_latency_us": 573.44,
00:26:28.944 "max_latency_us": 3019898.88
00:26:28.944 }
00:26:28.944 ],
00:26:28.944 "core_count": 1
00:26:28.944 }
00:26:29.226 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1856112
13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-06 13:22:42.084410] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
[2024-11-06 13:22:42.084485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856112 ]
[2024-11-06 13:22:42.179937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-06 13:22:42.230685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
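The terminated-job summary above is internally consistent: bdevperf was launched with -t 90 (hence "Running I/O for 90 seconds...") but killed as soon as the status checks finished, so the verify job reports roughly 27 seconds of runtime, and the throughput field is simply iops * io_size. Purely as a cross-check of the printed fields; the per-interval samples that follow show throughput climbing toward that average:

  awk 'BEGIN {
      iops = 11966.67303100066; io_size = 4096; runtime = 26.858008
      printf "MiB/s = %.8f\n", iops * io_size / (1024 * 1024)  # 46.74481653, matches "mibps"
      printf "I/Os  = %.0f\n", iops * runtime                  # ~321401 completions over the run
  }'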
00:26:29.226 10181.00 IOPS, 39.77 MiB/s [2024-11-06T12:23:11.128Z] 10796.00 IOPS, 42.17 MiB/s [2024-11-06T12:23:11.128Z] 10964.33 IOPS, 42.83 MiB/s [2024-11-06T12:23:11.128Z] 11301.00 IOPS, 44.14 MiB/s [2024-11-06T12:23:11.128Z] 11622.60 IOPS, 45.40 MiB/s [2024-11-06T12:23:11.128Z] 11823.33 IOPS, 46.18 MiB/s [2024-11-06T12:23:11.128Z] 11970.86 IOPS, 46.76 MiB/s [2024-11-06T12:23:11.128Z] 12105.25 IOPS, 47.29 MiB/s [2024-11-06T12:23:11.128Z] 12194.11 IOPS, 47.63 MiB/s [2024-11-06T12:23:11.128Z] 12256.20 IOPS, 47.88 MiB/s [2024-11-06T12:23:11.128Z] 12312.73 IOPS, 48.10 MiB/s [2024-11-06T12:23:11.128Z]
[2024-11-06 13:22:55: repetitive nvme_qpair.c *NOTICE* pairs elided. nvme_io_qpair_print_command logged WRITE sqid:1 lba:131016-131064 and lba:0-912 (len:8, SGL DATA BLOCK) plus READ lba:130968-131008 (SGL TRANSPORT DATA BLOCK); spdk_nvme_print_completion reported ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 for every one of them.]
00:26:29.230 12286.92 IOPS, 48.00 MiB/s [2024-11-06T12:23:11.132Z] 11341.77 IOPS, 44.30 MiB/s [2024-11-06T12:23:11.132Z] 10531.64 IOPS, 41.14 MiB/s [2024-11-06T12:23:11.132Z] 9891.40 IOPS, 38.64 MiB/s [2024-11-06T12:23:11.132Z] 10081.75 IOPS, 39.38 MiB/s [2024-11-06T12:23:11.132Z] 10248.76 IOPS, 40.03 MiB/s [2024-11-06T12:23:11.132Z] 10612.17 IOPS, 41.45 MiB/s [2024-11-06T12:23:11.132Z] 10944.37 IOPS, 42.75 MiB/s [2024-11-06T12:23:11.132Z] 11159.15 IOPS, 43.59 MiB/s [2024-11-06T12:23:11.132Z] 11247.62 IOPS, 43.94 MiB/s [2024-11-06T12:23:11.132Z] 11328.68 IOPS, 44.25 MiB/s [2024-11-06T12:23:11.132Z] 11536.91 IOPS, 45.07 MiB/s [2024-11-06T12:23:11.132Z] 11764.17 IOPS, 45.95 MiB/s [2024-11-06T12:23:11.132Z]
[2024-11-06 13:23:08: same repetitive pattern elided. WRITE sqid:1 lba:115832-116624 (len:8) and READ lba:115704-115808 completions all reported ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; the capture ends partway through this dump.]
dnr:0 00:26:29.231 [2024-11-06 13:23:08.588860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.231 [2024-11-06 13:23:08.588960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.231 [2024-11-06 13:23:08.588976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.588986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.231 [2024-11-06 13:23:08.588993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.589004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.231 [2024-11-06 13:23:08.589009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.589019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.231 [2024-11-06 13:23:08.589025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.231 [2024-11-06 13:23:08.589035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.232 [2024-11-06 13:23:08.589732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.589878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.589884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.590621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.232 [2024-11-06 13:23:08.590633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.232 [2024-11-06 13:23:08.590644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 
sqhd:0036 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.590888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.590899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.590904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 
[2024-11-06 13:23:08.592400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.233 [2024-11-06 13:23:08.592431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.233 [2024-11-06 13:23:08.592567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.233 [2024-11-06 13:23:08.592573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.592604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.592619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.592681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.592697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.592713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.592757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:26:29.234 [2024-11-06 13:23:08.593486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.234 [2024-11-06 13:23:08.593585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.593777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.593783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.234 [2024-11-06 13:23:08.594568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.234 [2024-11-06 13:23:08.594578] 
00:26:29.234 [2024-11-06 13:23:08.594589] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs follow: READ and WRITE commands on sqid:1 (nsid:1, lba range 115848-117728, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 / SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:26:29.240 [2024-11-06 13:23:08.618480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:29.240 [2024-11-06 13:23:08.618817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.240 [2024-11-06 13:23:08.618982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.240 [2024-11-06 13:23:08.618993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.240 [2024-11-06 13:23:08.618999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 
dnr:0 00:26:29.241 [2024-11-06 13:23:08.620638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.620760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.620850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.620856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.621281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.621299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.621315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.241 [2024-11-06 13:23:08.621331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.621347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.621363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.241 [2024-11-06 13:23:08.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.241 [2024-11-06 13:23:08.621390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.621395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:92 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 
13:23:08.622927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.622932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.622960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.622965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.623426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.623444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.242 [2024-11-06 13:23:08.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.623476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.623508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.623524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.242 [2024-11-06 13:23:08.623540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.242 [2024-11-06 13:23:08.623550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.623556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.623571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.623589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.623605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.623621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.623637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.623652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.623668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.623679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.623685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.243 [2024-11-06 13:23:08.624783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.243 [2024-11-06 13:23:08.624816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.243 [2024-11-06 13:23:08.624827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:29.243 [2024-11-06 13:23:08.624832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:29.243 [2024-11-06 13:23:08.624843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.243 [2024-11-06 13:23:08.624848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:29.243 [2024-11-06 13:23:08.624859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.243 [2024-11-06 13:23:08.624864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... remaining READ/WRITE command/completion pairs in this burst elided; every I/O on qid:1 (nsid:1, lba range 117280-119080, len:8) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-06 13:23:08.624875 through 13:23:08.635093 ...]
00:26:29.249 [2024-11-06 13:23:08.635093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119080
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:118904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.635921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.635931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.635936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 
dnr:0 00:26:29.249 [2024-11-06 13:23:08.636449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.249 [2024-11-06 13:23:08.636610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.249 [2024-11-06 13:23:08.636625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.249 [2024-11-06 13:23:08.636635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.636641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.636656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.636671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.636687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.636702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.636719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.636734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.636755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.636770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.636781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.636786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.637439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.637454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.637470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.637581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.637597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.637607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.637613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.250 [2024-11-06 13:23:08.638732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-11-06 13:23:08.638753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:29.250 [2024-11-06 13:23:08.638763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.638799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.638816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.638832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.638847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.638940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.638987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.638999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.639004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.639014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.639020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 
13:23:08.640266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-11-06 13:23:08.640422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.251 [2024-11-06 13:23:08.640437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:29.251 [2024-11-06 13:23:08.640448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.640454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.640464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.640470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.641017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.641178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.641193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.641208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.641250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:26:29.252 [2024-11-06 13:23:08.641266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.641271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.642129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.642146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.642162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.642177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.642192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.642208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.252 [2024-11-06 13:23:08.642225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.642241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:29.252 [2024-11-06 13:23:08.642251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.252 [2024-11-06 13:23:08.642256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:29.252 [2024-11-06 13:23:08.642 - 13:23:08.650] (a long run of nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs elided here for readability: every outstanding READ and WRITE on sqid:1, nsid:1, lba 118456-120376, len:8 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0x0021 through 0x007f and wrapping to 0x0045)
00:26:29.257 11898.84 IOPS, 46.48 MiB/s
[2024-11-06T12:23:11.159Z] 11938.00 IOPS, 46.63 MiB/s
[2024-11-06T12:23:11.159Z] Received shutdown signal, test time was about 26.858627 seconds
00:26:29.257
00:26:29.257 Latency(us)
00:26:29.257 [2024-11-06T12:23:11.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.257 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:29.257 Verification LBA range: start 0x0 length 0x4000
00:26:29.257 Nvme0n1 : 26.86 11966.67 46.74 0.00 0.00 10677.82 573.44 3019898.88
00:26:29.257 [2024-11-06T12:23:11.159Z] ===================================================================================================================
00:26:29.257 [2024-11-06T12:23:11.159Z] Total : 11966.67 46.74 0.00 0.00 10677.82 573.44 3019898.88
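The flood of (03/02) completions collapsed above is what the host prints while a path's ANA group reports INACCESSIBLE; the multipath status test presumably flips listener ANA states and polls the resulting path states back. A minimal sketch of such a poll, in the same bash register, follows; the bdev_nvme_get_io_paths output shape, the "ana_state" field, the jq filter, and the 10.0.0.2:4420 address are all assumptions for illustration, not verbatim excerpts from multipath_status.sh.

    # Hypothetical helper: poll the ANA state of one path of bdev Nvme0n1.
    # Assumptions: SPDK's bdev_nvme_get_io_paths RPC reports an "ana_state"
    # field per io_path (field names not verified against this SPDK revision),
    # and jq is installed. The address 10.0.0.2:4420 is a placeholder.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    ana_state_of_path() {
        local traddr=$1 trsvcid=$2
        "$rpc_py" bdev_nvme_get_io_paths -n Nvme0n1 \
            | jq -r --arg a "$traddr" --arg s "$trsvcid" \
                '.poll_groups[].io_paths[]
                 | select(.transport.traddr == $a and .transport.trsvcid == $s)
                 | .ana_state'
    }

    # Wait (up to ~20s) for the path to leave the inaccessible state.
    for _ in $(seq 1 20); do
        [ "$(ana_state_of_path 10.0.0.2 4420)" != inaccessible ] && break
        sleep 1
    done

While the poll reports inaccessible, queued I/O keeps completing with the (03/02) status seen above until the other path takes over or the state flips back.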
00:26:29.257 13:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:29.257 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:29.257 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:29.519 rmmod nvme_tcp
00:26:29.519 rmmod nvme_fabrics
00:26:29.519 rmmod nvme_keyring
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1855713 ']'
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1855713
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1855713 ']'
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1855713
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1855713
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1855713'
00:26:29.519 killing process with pid 1855713
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1855713
00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1855713
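The killprocess call traced at common/autotest_common.sh@952-976 above reduces to roughly the following shape; this is a condensed sketch reconstructed from the xtrace lines, not the verbatim SPDK helper, which also handles sudo-wrapped targets and non-Linux hosts that this run never reaches.

    # Condensed reconstruction of killprocess from the trace above; simplified,
    # Linux-only, and without the sudo branch that '[' reactor_0 = sudo ']'
    # shows being evaluated (and skipped) in this run.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # '[' -z 1855713 ']'
        kill -0 "$pid" || return 0             # nothing to do if already gone
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0 here
        fi
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                        # reap it so the exit code is collected
        fi
    }

Here pid 1855713 is the nvmf target (reactor_0) started earlier in the test, so the kill/wait pair logged at @971/@976 is what finally tears it down.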
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.519 13:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.062 00:26:32.062 real 0m41.433s 00:26:32.062 user 1m46.807s 00:26:32.062 sys 0m11.709s 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:32.062 ************************************ 00:26:32.062 END TEST nvmf_host_multipath_status 00:26:32.062 ************************************ 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.062 ************************************ 00:26:32.062 START TEST nvmf_discovery_remove_ifc 00:26:32.062 ************************************ 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:32.062 * Looking for test storage... 
00:26:32.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:32.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.062 --rc genhtml_branch_coverage=1 00:26:32.062 --rc genhtml_function_coverage=1 00:26:32.062 --rc genhtml_legend=1 00:26:32.062 --rc geninfo_all_blocks=1 00:26:32.062 --rc geninfo_unexecuted_blocks=1 00:26:32.062 00:26:32.062 ' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:32.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.062 --rc genhtml_branch_coverage=1 00:26:32.062 --rc genhtml_function_coverage=1 00:26:32.062 --rc genhtml_legend=1 00:26:32.062 --rc geninfo_all_blocks=1 00:26:32.062 --rc geninfo_unexecuted_blocks=1 00:26:32.062 00:26:32.062 ' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:32.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.062 --rc genhtml_branch_coverage=1 00:26:32.062 --rc genhtml_function_coverage=1 00:26:32.062 --rc genhtml_legend=1 00:26:32.062 --rc geninfo_all_blocks=1 00:26:32.062 --rc geninfo_unexecuted_blocks=1 00:26:32.062 00:26:32.062 ' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:32.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.062 --rc genhtml_branch_coverage=1 00:26:32.062 --rc genhtml_function_coverage=1 00:26:32.062 --rc genhtml_legend=1 00:26:32.062 --rc geninfo_all_blocks=1 00:26:32.062 --rc geninfo_unexecuted_blocks=1 00:26:32.062 00:26:32.062 ' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.062 
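The probe that just returned 0 is scripts/common.sh deciding whether the installed lcov predates 2.x: `lt 1.15 2` splits each version on dots and compares field by field, numerically, treating missing fields as zero. A compact stand-alone sketch of that comparison (hypothetical helper name; the in-tree cmp_versions also handles the other operators and the .-: separator class):

    # Succeed when dot-separated version $1 sorts strictly before $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal, so not less-than
    }
    version_lt 1.15 2 && echo 'old lcov: fall back to --rc lcov_branch_coverage=1 flags'

The success path is exactly what the trace takes: the legacy --rc LCOV_OPTS blocks above are exported before test/nvmf/common.sh is sourced.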
13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.062 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.063 13:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:40.196 13:23:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:40.196 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.196 13:23:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:40.196 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:40.196 Found net devices under 0000:31:00.0: cvl_0_0 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:40.196 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:40.197 Found net devices under 0000:31:00.1: cvl_0_1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:40.197 
13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:40.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:26:40.197 00:26:40.197 --- 10.0.0.2 ping statistics --- 00:26:40.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.197 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:26:40.197 00:26:40.197 --- 10.0.0.1 ping statistics --- 00:26:40.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.197 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1866035 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1866035 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1866035 ']' 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:40.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:40.197 13:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.197 [2024-11-06 13:23:21.447250] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:26:40.197 [2024-11-06 13:23:21.447314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.197 [2024-11-06 13:23:21.549824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.197 [2024-11-06 13:23:21.600552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.197 [2024-11-06 13:23:21.600601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.197 [2024-11-06 13:23:21.600610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.197 [2024-11-06 13:23:21.600617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.197 [2024-11-06 13:23:21.600629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.197 [2024-11-06 13:23:21.601432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.458 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.458 [2024-11-06 13:23:22.336279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.458 [2024-11-06 13:23:22.344584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:40.458 null0 00:26:40.719 [2024-11-06 13:23:22.376497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1866377 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1866377 /tmp/host.sock 00:26:40.719 13:23:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1866377 ']' 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:40.719 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:40.719 13:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.719 [2024-11-06 13:23:22.454704] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:26:40.719 [2024-11-06 13:23:22.454772] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1866377 ] 00:26:40.719 [2024-11-06 13:23:22.549242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.719 [2024-11-06 13:23:22.602230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:41.660 13:23:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.660 13:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.600 [2024-11-06 13:23:24.435675] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.600 [2024-11-06 13:23:24.435696] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.600 [2024-11-06 13:23:24.435709] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.860 [2024-11-06 13:23:24.562114] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:43.121 [2024-11-06 13:23:24.785438] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:43.121 [2024-11-06 13:23:24.786470] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb9f550:1 started. 00:26:43.121 [2024-11-06 13:23:24.788027] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:43.121 [2024-11-06 13:23:24.788080] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:43.121 [2024-11-06 13:23:24.788101] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:43.121 [2024-11-06 13:23:24.788115] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:43.121 [2024-11-06 13:23:24.788136] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.121 [2024-11-06 13:23:24.835504] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb9f550 was disconnected and freed. delete nvme_qpair. 
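Discovery has now attached the remote namespace and the host app surfaces it as bdev nvme0n1; wait_for_bdev verifies that by polling bdev_get_bdevs over the /tmp/host.sock RPC socket until the list matches. Reduced to a stand-alone loop (a sketch: rpc.py stands for the full scripts/rpc.py path used in the trace, and the deadline is added here for illustration — the in-tree helper just sleeps and retries):

    # Poll the host app's RPC socket until the bdev list equals $1.
    wait_for_bdev() {
        local expected=$1 deadline=$((SECONDS + 30)) names
        while (( SECONDS < deadline )); do
            names=$(rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ $names == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }
    wait_for_bdev nvme0n1    # after discovery attach
    wait_for_bdev ''         # after the interface is pulled, below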
00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.121 13:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.121 13:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.121 13:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.502 13:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.442 13:23:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.442 13:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.381 13:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.321 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.580 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.580 13:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.521 [2024-11-06 13:23:30.228624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:48.521 [2024-11-06 13:23:30.228670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.521 [2024-11-06 13:23:30.228681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.521 [2024-11-06 13:23:30.228690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.521 [2024-11-06 13:23:30.228695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.521 [2024-11-06 13:23:30.228701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.521 [2024-11-06 13:23:30.228706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.521 [2024-11-06 13:23:30.228712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.521 [2024-11-06 13:23:30.228717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.521 [2024-11-06 13:23:30.228723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.521 [2024-11-06 13:23:30.228728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.521 [2024-11-06 13:23:30.228734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7bec0 is same with the state(6) to be set 00:26:48.521 [2024-11-06 13:23:30.238646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7bec0 (9): Bad file descriptor 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.521 [2024-11-06 13:23:30.248683] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:48.521 [2024-11-06 13:23:30.248695] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:48.521 [2024-11-06 13:23:30.248701] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:48.521 [2024-11-06 13:23:30.248705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:48.521 [2024-11-06 13:23:30.248726] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
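The errno-110 and Bad-file-descriptor churn above is bdev_nvme noticing that the admin queue died and entering its reconnect loop; the knobs passed to bdev_nvme_start_discovery earlier bound it: --reconnect-delay-sec 1 retries once per second, --fast-io-fail-timeout-sec 1 fails queued I/O after a second, and --ctrlr-loss-timeout-sec 2 deletes the controller (and its nvme0n1 bdev) two seconds after the path is lost. The same policy can be set process-wide rather than per discovery service (a sketch; flag spellings match rpc.py bdev_nvme_set_options but are worth checking against your SPDK revision, and set_options must run before any controller is attached):

    # Global reconnect policy for all bdev_nvme controllers:
    # retry every 1s, fail pending I/O after 1s, delete after 2s offline.
    rpc.py -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --ctrlr-loss-timeout-sec 2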
00:26:48.521 13:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.463 [2024-11-06 13:23:31.293809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:49.463 [2024-11-06 13:23:31.293900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7bec0 with addr=10.0.0.2, port=4420 00:26:49.463 [2024-11-06 13:23:31.293931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7bec0 is same with the state(6) to be set 00:26:49.463 [2024-11-06 13:23:31.293987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7bec0 (9): Bad file descriptor 00:26:49.463 [2024-11-06 13:23:31.295116] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:49.463 [2024-11-06 13:23:31.295187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.463 [2024-11-06 13:23:31.295209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.463 [2024-11-06 13:23:31.295234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.463 [2024-11-06 13:23:31.295254] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.463 [2024-11-06 13:23:31.295270] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.463 [2024-11-06 13:23:31.295283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:49.463 [2024-11-06 13:23:31.295305] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.463 [2024-11-06 13:23:31.295319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.463 13:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.463 13:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.463 13:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.405 [2024-11-06 13:23:32.297738] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:50.405 [2024-11-06 13:23:32.297757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:50.405 [2024-11-06 13:23:32.297766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:50.405 [2024-11-06 13:23:32.297771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:50.405 [2024-11-06 13:23:32.297781] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:50.405 [2024-11-06 13:23:32.297786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:50.405 [2024-11-06 13:23:32.297789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:50.405 [2024-11-06 13:23:32.297792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:50.405 [2024-11-06 13:23:32.297810] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:50.405 [2024-11-06 13:23:32.297827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.405 [2024-11-06 13:23:32.297834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.405 [2024-11-06 13:23:32.297841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.405 [2024-11-06 13:23:32.297846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.405 [2024-11-06 13:23:32.297852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.405 [2024-11-06 13:23:32.297857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.405 [2024-11-06 13:23:32.297862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.405 [2024-11-06 13:23:32.297867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.405 [2024-11-06 13:23:32.297873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.405 [2024-11-06 13:23:32.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.405 [2024-11-06 13:23:32.297883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:50.405 [2024-11-06 13:23:32.298308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6b600 (9): Bad file descriptor 00:26:50.405 [2024-11-06 13:23:32.299319] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:50.405 [2024-11-06 13:23:32.299328] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.666 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:50.667 13:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.052 13:23:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.052 13:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.622 [2024-11-06 13:23:34.310324] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:52.622 [2024-11-06 13:23:34.310338] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:52.622 [2024-11-06 13:23:34.310347] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.622 [2024-11-06 13:23:34.440723] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.882 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.882 [2024-11-06 13:23:34.622922] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:52.882 [2024-11-06 13:23:34.623599] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb86540:1 started. 
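The recovery half of the test was kicked off by the @82/@83 steps above: the target-side interface is simply restored inside its network namespace with plain iproute2 commands (namespace and device names as used by this CI host), after which the discovery poller re-attaches the subsystem as logged here:

    # Re-attach the flushed target interface so discovery can re-find
    # nqn.2016-06.io.spdk:cnode0 (discovery_remove_ifc.sh steps @82/@83).
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up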
00:26:52.882 [2024-11-06 13:23:34.624498] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.882 [2024-11-06 13:23:34.624527] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.882 [2024-11-06 13:23:34.624541] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.882 [2024-11-06 13:23:34.624552] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:52.883 [2024-11-06 13:23:34.624558] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.883 [2024-11-06 13:23:34.628624] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb86540 was disconnected and freed. delete nvme_qpair. 00:26:52.883 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.883 13:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1866377 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1866377 ']' 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1866377 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:53.823 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1866377 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1866377' 00:26:54.084 killing process with pid 1866377 
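killprocess, whose kill/wait completes just below, follows a probe-kill-wait idiom; a simplified sketch (the real helper in autotest_common.sh also special-cases sudo-owned processes, as the uname/ps checks above show):

    # Probe-kill-wait: confirm the pid is alive, terminate it, then reap it
    # so the harness sees the app's final exit status before moving on.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # still running?
        kill "$pid"
        wait "$pid" || true          # reap; tolerate a nonzero exit
    }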
00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1866377 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1866377 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.084 rmmod nvme_tcp 00:26:54.084 rmmod nvme_fabrics 00:26:54.084 rmmod nvme_keyring 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1866035 ']' 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1866035 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1866035 ']' 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1866035 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.084 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1866035 00:26:54.345 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:54.345 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:54.345 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1866035' 00:26:54.345 killing process with pid 1866035 00:26:54.345 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1866035 00:26:54.345 13:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1866035 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:54.345 13:23:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.345 13:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.893 00:26:56.893 real 0m24.656s 00:26:56.893 user 0m29.732s 00:26:56.893 sys 0m7.241s 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 ************************************ 00:26:56.893 END TEST nvmf_discovery_remove_ifc 00:26:56.893 ************************************ 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.893 ************************************ 00:26:56.893 START TEST nvmf_identify_kernel_target 00:26:56.893 ************************************ 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:56.893 * Looking for test storage... 
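run_test, which opened the nvmf_identify_kernel_target block above, brackets each test with the START/END banners and the time(1) summary seen throughout this log; schematically (a sketch of the visible behavior, not the exact autotest_common.sh implementation):

    # Banner-and-time wrapper, approximating what run_test prints.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # produces the real/user/sys lines
        echo "END TEST $name"
    }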
00:26:56.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.893 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.894 --rc genhtml_branch_coverage=1 00:26:56.894 --rc genhtml_function_coverage=1 00:26:56.894 --rc genhtml_legend=1 00:26:56.894 --rc geninfo_all_blocks=1 00:26:56.894 --rc geninfo_unexecuted_blocks=1 00:26:56.894 00:26:56.894 ' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.894 --rc genhtml_branch_coverage=1 00:26:56.894 --rc genhtml_function_coverage=1 00:26:56.894 --rc genhtml_legend=1 00:26:56.894 --rc geninfo_all_blocks=1 00:26:56.894 --rc geninfo_unexecuted_blocks=1 00:26:56.894 00:26:56.894 ' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.894 --rc genhtml_branch_coverage=1 00:26:56.894 --rc genhtml_function_coverage=1 00:26:56.894 --rc genhtml_legend=1 00:26:56.894 --rc geninfo_all_blocks=1 00:26:56.894 --rc geninfo_unexecuted_blocks=1 00:26:56.894 00:26:56.894 ' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:56.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.894 --rc genhtml_branch_coverage=1 00:26:56.894 --rc genhtml_function_coverage=1 00:26:56.894 --rc genhtml_legend=1 00:26:56.894 --rc geninfo_all_blocks=1 00:26:56.894 --rc geninfo_unexecuted_blocks=1 00:26:56.894 00:26:56.894 ' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:56.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.894 13:23:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.125 13:23:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.125 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:05.126 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:05.126 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:05.126 Found net devices under 0000:31:00.0: cvl_0_0 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:05.126 Found net devices under 0000:31:00.1: cvl_0_1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.126 13:23:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:27:05.126 00:27:05.126 --- 10.0.0.2 ping statistics --- 00:27:05.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.126 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:27:05.126 00:27:05.126 --- 10.0.0.1 ping statistics --- 00:27:05.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.126 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.126 13:23:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:05.126 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:05.127 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:05.127 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:05.127 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:05.127 13:23:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.672 Waiting for block devices as requested 00:27:07.931 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:07.931 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:07.931 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:08.191 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:08.191 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:08.191 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:08.452 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:08.452 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:08.452 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:08.712 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:08.712 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:08.973 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:08.973 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:08.973 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:09.234 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:09.234 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:09.234 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
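With the namespace plumbing and ping checks done, configure_kernel_target builds a kernel NVMe-oF/TCP target purely through configfs; the mkdir/echo/ln steps logged below condense to the sketch here. xtrace does not show redirect targets, so the standard nvmet attribute names are assumed (the Model Number in the identify output further down confirms attr_model):

    # Kernel nvmet target over TCP, condensed from configure_kernel_target.
    nvmet=/sys/kernel/config/nvmet
    sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet   # nvmet-tcp is typically autoloaded when the port is enabled
    mkdir "$sub" "$sub/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$sub" "$nvmet/ports/1/subsystems/"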
00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:09.807 No valid GPT data, bailing 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:09.807 00:27:09.807 Discovery Log Number of Records 2, Generation counter 2 00:27:09.807 =====Discovery Log Entry 0====== 00:27:09.807 trtype: tcp 00:27:09.807 adrfam: ipv4 00:27:09.807 subtype: current discovery subsystem 00:27:09.807 treq: not specified, sq flow control disable supported 00:27:09.807 portid: 1 00:27:09.807 trsvcid: 4420 00:27:09.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:09.807 traddr: 10.0.0.1 00:27:09.807 eflags: none 00:27:09.807 sectype: none 00:27:09.807 =====Discovery Log Entry 1====== 00:27:09.807 trtype: tcp 00:27:09.807 adrfam: ipv4 00:27:09.807 subtype: nvme subsystem 00:27:09.807 treq: not specified, sq flow control disable 
supported 00:27:09.807 portid: 1 00:27:09.807 trsvcid: 4420 00:27:09.807 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:09.807 traddr: 10.0.0.1 00:27:09.807 eflags: none 00:27:09.807 sectype: none 00:27:09.807 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:09.807 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:09.807 ===================================================== 00:27:09.807 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:09.807 ===================================================== 00:27:09.807 Controller Capabilities/Features 00:27:09.807 ================================ 00:27:09.807 Vendor ID: 0000 00:27:09.807 Subsystem Vendor ID: 0000 00:27:09.807 Serial Number: 4762b7fa74b4ba9d093b 00:27:09.807 Model Number: Linux 00:27:09.807 Firmware Version: 6.8.9-20 00:27:09.808 Recommended Arb Burst: 0 00:27:09.808 IEEE OUI Identifier: 00 00 00 00:27:09.808 Multi-path I/O 00:27:09.808 May have multiple subsystem ports: No 00:27:09.808 May have multiple controllers: No 00:27:09.808 Associated with SR-IOV VF: No 00:27:09.808 Max Data Transfer Size: Unlimited 00:27:09.808 Max Number of Namespaces: 0 00:27:09.808 Max Number of I/O Queues: 1024 00:27:09.808 NVMe Specification Version (VS): 1.3 00:27:09.808 NVMe Specification Version (Identify): 1.3 00:27:09.808 Maximum Queue Entries: 1024 00:27:09.808 Contiguous Queues Required: No 00:27:09.808 Arbitration Mechanisms Supported 00:27:09.808 Weighted Round Robin: Not Supported 00:27:09.808 Vendor Specific: Not Supported 00:27:09.808 Reset Timeout: 7500 ms 00:27:09.808 Doorbell Stride: 4 bytes 00:27:09.808 NVM Subsystem Reset: Not Supported 00:27:09.808 Command Sets Supported 00:27:09.808 NVM Command Set: Supported 00:27:09.808 Boot Partition: Not Supported 00:27:09.808 Memory Page Size Minimum: 4096 bytes 00:27:09.808 Memory Page Size Maximum: 4096 bytes 00:27:09.808 Persistent Memory Region: Not Supported 00:27:09.808 Optional Asynchronous Events Supported 00:27:09.808 Namespace Attribute Notices: Not Supported 00:27:09.808 Firmware Activation Notices: Not Supported 00:27:09.808 ANA Change Notices: Not Supported 00:27:09.808 PLE Aggregate Log Change Notices: Not Supported 00:27:09.808 LBA Status Info Alert Notices: Not Supported 00:27:09.808 EGE Aggregate Log Change Notices: Not Supported 00:27:09.808 Normal NVM Subsystem Shutdown event: Not Supported 00:27:09.808 Zone Descriptor Change Notices: Not Supported 00:27:09.808 Discovery Log Change Notices: Supported 00:27:09.808 Controller Attributes 00:27:09.808 128-bit Host Identifier: Not Supported 00:27:09.808 Non-Operational Permissive Mode: Not Supported 00:27:09.808 NVM Sets: Not Supported 00:27:09.808 Read Recovery Levels: Not Supported 00:27:09.808 Endurance Groups: Not Supported 00:27:09.808 Predictable Latency Mode: Not Supported 00:27:09.808 Traffic Based Keep ALive: Not Supported 00:27:09.808 Namespace Granularity: Not Supported 00:27:09.808 SQ Associations: Not Supported 00:27:09.808 UUID List: Not Supported 00:27:09.808 Multi-Domain Subsystem: Not Supported 00:27:09.808 Fixed Capacity Management: Not Supported 00:27:09.808 Variable Capacity Management: Not Supported 00:27:09.808 Delete Endurance Group: Not Supported 00:27:09.808 Delete NVM Set: Not Supported 00:27:09.808 Extended LBA Formats Supported: Not Supported 00:27:09.808 Flexible Data Placement 
Supported: Not Supported 00:27:09.808 00:27:09.808 Controller Memory Buffer Support 00:27:09.808 ================================ 00:27:09.808 Supported: No 00:27:09.808 00:27:09.808 Persistent Memory Region Support 00:27:09.808 ================================ 00:27:09.808 Supported: No 00:27:09.808 00:27:09.808 Admin Command Set Attributes 00:27:09.808 ============================ 00:27:09.808 Security Send/Receive: Not Supported 00:27:09.808 Format NVM: Not Supported 00:27:09.808 Firmware Activate/Download: Not Supported 00:27:09.808 Namespace Management: Not Supported 00:27:09.808 Device Self-Test: Not Supported 00:27:09.808 Directives: Not Supported 00:27:09.808 NVMe-MI: Not Supported 00:27:09.808 Virtualization Management: Not Supported 00:27:09.808 Doorbell Buffer Config: Not Supported 00:27:09.808 Get LBA Status Capability: Not Supported 00:27:09.808 Command & Feature Lockdown Capability: Not Supported 00:27:09.808 Abort Command Limit: 1 00:27:09.808 Async Event Request Limit: 1 00:27:09.808 Number of Firmware Slots: N/A 00:27:09.808 Firmware Slot 1 Read-Only: N/A 00:27:09.808 Firmware Activation Without Reset: N/A 00:27:09.808 Multiple Update Detection Support: N/A 00:27:09.808 Firmware Update Granularity: No Information Provided 00:27:09.808 Per-Namespace SMART Log: No 00:27:09.808 Asymmetric Namespace Access Log Page: Not Supported 00:27:09.808 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:09.808 Command Effects Log Page: Not Supported 00:27:09.808 Get Log Page Extended Data: Supported 00:27:09.808 Telemetry Log Pages: Not Supported 00:27:09.808 Persistent Event Log Pages: Not Supported 00:27:09.808 Supported Log Pages Log Page: May Support 00:27:09.808 Commands Supported & Effects Log Page: Not Supported 00:27:09.808 Feature Identifiers & Effects Log Page:May Support 00:27:09.808 NVMe-MI Commands & Effects Log Page: May Support 00:27:09.808 Data Area 4 for Telemetry Log: Not Supported 00:27:09.808 Error Log Page Entries Supported: 1 00:27:09.808 Keep Alive: Not Supported 00:27:09.808 00:27:09.808 NVM Command Set Attributes 00:27:09.808 ========================== 00:27:09.808 Submission Queue Entry Size 00:27:09.808 Max: 1 00:27:09.808 Min: 1 00:27:09.808 Completion Queue Entry Size 00:27:09.808 Max: 1 00:27:09.808 Min: 1 00:27:09.808 Number of Namespaces: 0 00:27:09.808 Compare Command: Not Supported 00:27:09.808 Write Uncorrectable Command: Not Supported 00:27:09.808 Dataset Management Command: Not Supported 00:27:09.808 Write Zeroes Command: Not Supported 00:27:09.808 Set Features Save Field: Not Supported 00:27:09.808 Reservations: Not Supported 00:27:09.808 Timestamp: Not Supported 00:27:09.808 Copy: Not Supported 00:27:09.808 Volatile Write Cache: Not Present 00:27:09.808 Atomic Write Unit (Normal): 1 00:27:09.808 Atomic Write Unit (PFail): 1 00:27:09.808 Atomic Compare & Write Unit: 1 00:27:09.808 Fused Compare & Write: Not Supported 00:27:09.808 Scatter-Gather List 00:27:09.808 SGL Command Set: Supported 00:27:09.808 SGL Keyed: Not Supported 00:27:09.808 SGL Bit Bucket Descriptor: Not Supported 00:27:09.808 SGL Metadata Pointer: Not Supported 00:27:09.808 Oversized SGL: Not Supported 00:27:09.808 SGL Metadata Address: Not Supported 00:27:09.808 SGL Offset: Supported 00:27:09.808 Transport SGL Data Block: Not Supported 00:27:09.808 Replay Protected Memory Block: Not Supported 00:27:09.808 00:27:09.808 Firmware Slot Information 00:27:09.808 ========================= 00:27:09.808 Active slot: 0 00:27:09.808 00:27:09.808 00:27:09.808 Error Log 00:27:09.808 
========= 00:27:09.808 00:27:09.808 Active Namespaces 00:27:09.808 ================= 00:27:09.808 Discovery Log Page 00:27:09.808 ================== 00:27:09.808 Generation Counter: 2 00:27:09.808 Number of Records: 2 00:27:09.808 Record Format: 0 00:27:09.808 00:27:09.808 Discovery Log Entry 0 00:27:09.808 ---------------------- 00:27:09.808 Transport Type: 3 (TCP) 00:27:09.808 Address Family: 1 (IPv4) 00:27:09.808 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:09.808 Entry Flags: 00:27:09.808 Duplicate Returned Information: 0 00:27:09.808 Explicit Persistent Connection Support for Discovery: 0 00:27:09.808 Transport Requirements: 00:27:09.808 Secure Channel: Not Specified 00:27:09.808 Port ID: 1 (0x0001) 00:27:09.808 Controller ID: 65535 (0xffff) 00:27:09.808 Admin Max SQ Size: 32 00:27:09.808 Transport Service Identifier: 4420 00:27:09.808 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:09.808 Transport Address: 10.0.0.1 00:27:09.808 Discovery Log Entry 1 00:27:09.808 ---------------------- 00:27:09.808 Transport Type: 3 (TCP) 00:27:09.808 Address Family: 1 (IPv4) 00:27:09.808 Subsystem Type: 2 (NVM Subsystem) 00:27:09.808 Entry Flags: 00:27:09.808 Duplicate Returned Information: 0 00:27:09.808 Explicit Persistent Connection Support for Discovery: 0 00:27:09.808 Transport Requirements: 00:27:09.808 Secure Channel: Not Specified 00:27:09.808 Port ID: 1 (0x0001) 00:27:09.808 Controller ID: 65535 (0xffff) 00:27:09.808 Admin Max SQ Size: 32 00:27:09.808 Transport Service Identifier: 4420 00:27:09.808 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:09.808 Transport Address: 10.0.0.1 00:27:09.808 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:10.070 get_feature(0x01) failed 00:27:10.070 get_feature(0x02) failed 00:27:10.070 get_feature(0x04) failed 00:27:10.070 ===================================================== 00:27:10.070 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:10.070 ===================================================== 00:27:10.070 Controller Capabilities/Features 00:27:10.070 ================================ 00:27:10.070 Vendor ID: 0000 00:27:10.070 Subsystem Vendor ID: 0000 00:27:10.070 Serial Number: 640d26651f37f88f8450 00:27:10.070 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:10.070 Firmware Version: 6.8.9-20 00:27:10.070 Recommended Arb Burst: 6 00:27:10.070 IEEE OUI Identifier: 00 00 00 00:27:10.070 Multi-path I/O 00:27:10.070 May have multiple subsystem ports: Yes 00:27:10.070 May have multiple controllers: Yes 00:27:10.070 Associated with SR-IOV VF: No 00:27:10.070 Max Data Transfer Size: Unlimited 00:27:10.070 Max Number of Namespaces: 1024 00:27:10.070 Max Number of I/O Queues: 128 00:27:10.070 NVMe Specification Version (VS): 1.3 00:27:10.070 NVMe Specification Version (Identify): 1.3 00:27:10.070 Maximum Queue Entries: 1024 00:27:10.070 Contiguous Queues Required: No 00:27:10.070 Arbitration Mechanisms Supported 00:27:10.070 Weighted Round Robin: Not Supported 00:27:10.070 Vendor Specific: Not Supported 00:27:10.070 Reset Timeout: 7500 ms 00:27:10.070 Doorbell Stride: 4 bytes 00:27:10.070 NVM Subsystem Reset: Not Supported 00:27:10.070 Command Sets Supported 00:27:10.070 NVM Command Set: Supported 00:27:10.070 Boot Partition: Not Supported 00:27:10.070 
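The controller answering this identify is not an SPDK target but the Linux kernel's nvmet target, assembled earlier in this section (nvmf/common.sh@686-705) out of configfs entries. The trace records only the values being echoed, not their destination files, so the attribute names in the sketch below are the standard nvmet configfs ones and should be read as an assumption:

  # Condensed kernel-target setup, mirroring the traced mkdir/echo/ln -s steps.
  # Assumes nvmet/nvmet-tcp are available and /dev/nvme0n1 is a spare disk.
  modprobe nvmet nvmet-tcp
  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the identify dump
  echo 1            > "$subsys/attr_allow_any_host"              # no host allow-list in this test
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back the namespace with the raw disk
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
  echo tcp          > "$cfs/ports/1/addr_trtype"
  echo 4420         > "$cfs/ports/1/addr_trsvcid"
  echo ipv4         > "$cfs/ports/1/addr_adrfam"
  ln -s "$subsys" "$cfs/ports/1/subsystems/"                     # expose the subsystem on the TCP port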
Memory Page Size Minimum: 4096 bytes 00:27:10.070 Memory Page Size Maximum: 4096 bytes 00:27:10.070 Persistent Memory Region: Not Supported 00:27:10.070 Optional Asynchronous Events Supported 00:27:10.070 Namespace Attribute Notices: Supported 00:27:10.070 Firmware Activation Notices: Not Supported 00:27:10.070 ANA Change Notices: Supported 00:27:10.070 PLE Aggregate Log Change Notices: Not Supported 00:27:10.070 LBA Status Info Alert Notices: Not Supported 00:27:10.070 EGE Aggregate Log Change Notices: Not Supported 00:27:10.070 Normal NVM Subsystem Shutdown event: Not Supported 00:27:10.070 Zone Descriptor Change Notices: Not Supported 00:27:10.070 Discovery Log Change Notices: Not Supported 00:27:10.070 Controller Attributes 00:27:10.070 128-bit Host Identifier: Supported 00:27:10.070 Non-Operational Permissive Mode: Not Supported 00:27:10.070 NVM Sets: Not Supported 00:27:10.070 Read Recovery Levels: Not Supported 00:27:10.070 Endurance Groups: Not Supported 00:27:10.070 Predictable Latency Mode: Not Supported 00:27:10.070 Traffic Based Keep ALive: Supported 00:27:10.070 Namespace Granularity: Not Supported 00:27:10.070 SQ Associations: Not Supported 00:27:10.070 UUID List: Not Supported 00:27:10.070 Multi-Domain Subsystem: Not Supported 00:27:10.070 Fixed Capacity Management: Not Supported 00:27:10.070 Variable Capacity Management: Not Supported 00:27:10.070 Delete Endurance Group: Not Supported 00:27:10.070 Delete NVM Set: Not Supported 00:27:10.070 Extended LBA Formats Supported: Not Supported 00:27:10.070 Flexible Data Placement Supported: Not Supported 00:27:10.070 00:27:10.070 Controller Memory Buffer Support 00:27:10.070 ================================ 00:27:10.070 Supported: No 00:27:10.070 00:27:10.070 Persistent Memory Region Support 00:27:10.070 ================================ 00:27:10.070 Supported: No 00:27:10.070 00:27:10.070 Admin Command Set Attributes 00:27:10.070 ============================ 00:27:10.070 Security Send/Receive: Not Supported 00:27:10.070 Format NVM: Not Supported 00:27:10.070 Firmware Activate/Download: Not Supported 00:27:10.070 Namespace Management: Not Supported 00:27:10.070 Device Self-Test: Not Supported 00:27:10.070 Directives: Not Supported 00:27:10.070 NVMe-MI: Not Supported 00:27:10.070 Virtualization Management: Not Supported 00:27:10.070 Doorbell Buffer Config: Not Supported 00:27:10.070 Get LBA Status Capability: Not Supported 00:27:10.070 Command & Feature Lockdown Capability: Not Supported 00:27:10.070 Abort Command Limit: 4 00:27:10.070 Async Event Request Limit: 4 00:27:10.070 Number of Firmware Slots: N/A 00:27:10.070 Firmware Slot 1 Read-Only: N/A 00:27:10.070 Firmware Activation Without Reset: N/A 00:27:10.070 Multiple Update Detection Support: N/A 00:27:10.070 Firmware Update Granularity: No Information Provided 00:27:10.070 Per-Namespace SMART Log: Yes 00:27:10.070 Asymmetric Namespace Access Log Page: Supported 00:27:10.070 ANA Transition Time : 10 sec 00:27:10.070 00:27:10.070 Asymmetric Namespace Access Capabilities 00:27:10.070 ANA Optimized State : Supported 00:27:10.070 ANA Non-Optimized State : Supported 00:27:10.070 ANA Inaccessible State : Supported 00:27:10.070 ANA Persistent Loss State : Supported 00:27:10.070 ANA Change State : Supported 00:27:10.070 ANAGRPID is not changed : No 00:27:10.070 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:10.070 00:27:10.070 ANA Group Identifier Maximum : 128 00:27:10.070 Number of ANA Group Identifiers : 128 00:27:10.070 Max Number of Allowed Namespaces : 1024 00:27:10.070 
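Both dumps in this section come from the same spdk_nvme_identify binary pointed at two different NQNs through the '-r' transport-ID string; the interleaved get_feature(0x01/0x02/0x04/0x05) failures are the tool probing optional features the kernel target evidently does not implement, and they do not fail the test. The invocations, with the paths this job uses:

  ID=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # discovery subsystem: returns the two-record discovery log shown above
  $ID -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # NVM subsystem: full controller and namespace identify of the kernel target
  $ID -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'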
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:10.070 Command Effects Log Page: Supported 00:27:10.070 Get Log Page Extended Data: Supported 00:27:10.070 Telemetry Log Pages: Not Supported 00:27:10.070 Persistent Event Log Pages: Not Supported 00:27:10.070 Supported Log Pages Log Page: May Support 00:27:10.070 Commands Supported & Effects Log Page: Not Supported 00:27:10.070 Feature Identifiers & Effects Log Page:May Support 00:27:10.070 NVMe-MI Commands & Effects Log Page: May Support 00:27:10.070 Data Area 4 for Telemetry Log: Not Supported 00:27:10.070 Error Log Page Entries Supported: 128 00:27:10.070 Keep Alive: Supported 00:27:10.070 Keep Alive Granularity: 1000 ms 00:27:10.070 00:27:10.070 NVM Command Set Attributes 00:27:10.070 ========================== 00:27:10.070 Submission Queue Entry Size 00:27:10.070 Max: 64 00:27:10.070 Min: 64 00:27:10.070 Completion Queue Entry Size 00:27:10.070 Max: 16 00:27:10.070 Min: 16 00:27:10.070 Number of Namespaces: 1024 00:27:10.070 Compare Command: Not Supported 00:27:10.070 Write Uncorrectable Command: Not Supported 00:27:10.070 Dataset Management Command: Supported 00:27:10.070 Write Zeroes Command: Supported 00:27:10.071 Set Features Save Field: Not Supported 00:27:10.071 Reservations: Not Supported 00:27:10.071 Timestamp: Not Supported 00:27:10.071 Copy: Not Supported 00:27:10.071 Volatile Write Cache: Present 00:27:10.071 Atomic Write Unit (Normal): 1 00:27:10.071 Atomic Write Unit (PFail): 1 00:27:10.071 Atomic Compare & Write Unit: 1 00:27:10.071 Fused Compare & Write: Not Supported 00:27:10.071 Scatter-Gather List 00:27:10.071 SGL Command Set: Supported 00:27:10.071 SGL Keyed: Not Supported 00:27:10.071 SGL Bit Bucket Descriptor: Not Supported 00:27:10.071 SGL Metadata Pointer: Not Supported 00:27:10.071 Oversized SGL: Not Supported 00:27:10.071 SGL Metadata Address: Not Supported 00:27:10.071 SGL Offset: Supported 00:27:10.071 Transport SGL Data Block: Not Supported 00:27:10.071 Replay Protected Memory Block: Not Supported 00:27:10.071 00:27:10.071 Firmware Slot Information 00:27:10.071 ========================= 00:27:10.071 Active slot: 0 00:27:10.071 00:27:10.071 Asymmetric Namespace Access 00:27:10.071 =========================== 00:27:10.071 Change Count : 0 00:27:10.071 Number of ANA Group Descriptors : 1 00:27:10.071 ANA Group Descriptor : 0 00:27:10.071 ANA Group ID : 1 00:27:10.071 Number of NSID Values : 1 00:27:10.071 Change Count : 0 00:27:10.071 ANA State : 1 00:27:10.071 Namespace Identifier : 1 00:27:10.071 00:27:10.071 Commands Supported and Effects 00:27:10.071 ============================== 00:27:10.071 Admin Commands 00:27:10.071 -------------- 00:27:10.071 Get Log Page (02h): Supported 00:27:10.071 Identify (06h): Supported 00:27:10.071 Abort (08h): Supported 00:27:10.071 Set Features (09h): Supported 00:27:10.071 Get Features (0Ah): Supported 00:27:10.071 Asynchronous Event Request (0Ch): Supported 00:27:10.071 Keep Alive (18h): Supported 00:27:10.071 I/O Commands 00:27:10.071 ------------ 00:27:10.071 Flush (00h): Supported 00:27:10.071 Write (01h): Supported LBA-Change 00:27:10.071 Read (02h): Supported 00:27:10.071 Write Zeroes (08h): Supported LBA-Change 00:27:10.071 Dataset Management (09h): Supported 00:27:10.071 00:27:10.071 Error Log 00:27:10.071 ========= 00:27:10.071 Entry: 0 00:27:10.071 Error Count: 0x3 00:27:10.071 Submission Queue Id: 0x0 00:27:10.071 Command Id: 0x5 00:27:10.071 Phase Bit: 0 00:27:10.071 Status Code: 0x2 00:27:10.071 Status Code Type: 0x0 00:27:10.071 Do Not Retry: 1 00:27:10.071 
Error Location: 0x28 00:27:10.071 LBA: 0x0 00:27:10.071 Namespace: 0x0 00:27:10.071 Vendor Log Page: 0x0 00:27:10.071 ----------- 00:27:10.071 Entry: 1 00:27:10.071 Error Count: 0x2 00:27:10.071 Submission Queue Id: 0x0 00:27:10.071 Command Id: 0x5 00:27:10.071 Phase Bit: 0 00:27:10.071 Status Code: 0x2 00:27:10.071 Status Code Type: 0x0 00:27:10.071 Do Not Retry: 1 00:27:10.071 Error Location: 0x28 00:27:10.071 LBA: 0x0 00:27:10.071 Namespace: 0x0 00:27:10.071 Vendor Log Page: 0x0 00:27:10.071 ----------- 00:27:10.071 Entry: 2 00:27:10.071 Error Count: 0x1 00:27:10.071 Submission Queue Id: 0x0 00:27:10.071 Command Id: 0x4 00:27:10.071 Phase Bit: 0 00:27:10.071 Status Code: 0x2 00:27:10.071 Status Code Type: 0x0 00:27:10.071 Do Not Retry: 1 00:27:10.071 Error Location: 0x28 00:27:10.071 LBA: 0x0 00:27:10.071 Namespace: 0x0 00:27:10.071 Vendor Log Page: 0x0 00:27:10.071 00:27:10.071 Number of Queues 00:27:10.071 ================ 00:27:10.071 Number of I/O Submission Queues: 128 00:27:10.071 Number of I/O Completion Queues: 128 00:27:10.071 00:27:10.071 ZNS Specific Controller Data 00:27:10.071 ============================ 00:27:10.071 Zone Append Size Limit: 0 00:27:10.071 00:27:10.071 00:27:10.071 Active Namespaces 00:27:10.071 ================= 00:27:10.071 get_feature(0x05) failed 00:27:10.071 Namespace ID:1 00:27:10.071 Command Set Identifier: NVM (00h) 00:27:10.071 Deallocate: Supported 00:27:10.071 Deallocated/Unwritten Error: Not Supported 00:27:10.071 Deallocated Read Value: Unknown 00:27:10.071 Deallocate in Write Zeroes: Not Supported 00:27:10.071 Deallocated Guard Field: 0xFFFF 00:27:10.071 Flush: Supported 00:27:10.071 Reservation: Not Supported 00:27:10.071 Namespace Sharing Capabilities: Multiple Controllers 00:27:10.071 Size (in LBAs): 3750748848 (1788GiB) 00:27:10.071 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:10.071 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:10.071 UUID: 046066ae-4aca-4078-8071-5b724c96c07c 00:27:10.071 Thin Provisioning: Not Supported 00:27:10.071 Per-NS Atomic Units: Yes 00:27:10.071 Atomic Write Unit (Normal): 8 00:27:10.071 Atomic Write Unit (PFail): 8 00:27:10.071 Preferred Write Granularity: 8 00:27:10.071 Atomic Compare & Write Unit: 8 00:27:10.071 Atomic Boundary Size (Normal): 0 00:27:10.071 Atomic Boundary Size (PFail): 0 00:27:10.071 Atomic Boundary Offset: 0 00:27:10.071 NGUID/EUI64 Never Reused: No 00:27:10.071 ANA group ID: 1 00:27:10.071 Namespace Write Protected: No 00:27:10.071 Number of LBA Formats: 1 00:27:10.071 Current LBA Format: LBA Format #00 00:27:10.071 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:10.071 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.071 rmmod nvme_tcp 00:27:10.071 rmmod nvme_fabrics 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.071 13:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:12.613 13:23:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:12.613 13:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.910 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:15.910 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:16.480 00:27:16.480 real 0m19.895s 00:27:16.480 user 0m5.496s 00:27:16.480 sys 0m11.365s 00:27:16.480 13:23:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:16.481 ************************************ 00:27:16.481 END TEST nvmf_identify_kernel_target 00:27:16.481 ************************************ 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.481 ************************************ 00:27:16.481 START TEST nvmf_auth_host 00:27:16.481 ************************************ 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:16.481 * Looking for test storage... 
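The hand-off above pairs nvmftestfini with clean_kernel_target: unload the host-side fabrics modules, strip the SPDK-tagged firewall rules, then dismantle the configfs tree in reverse order of creation. Condensed from the traced commands; the destination of the 'echo 0' is not shown in the trace and is assumed here to be the namespace enable attribute:

  modprobe -v -r nvme-tcp        # the harness retries these under set +e
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules tagged by the test
  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"                 # quiesce the namespace first (assumed target of 'echo 0')
  rm -f "$cfs/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$cfs/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet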
00:27:16.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:16.481 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.741 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:16.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.742 --rc genhtml_branch_coverage=1 00:27:16.742 --rc genhtml_function_coverage=1 00:27:16.742 --rc genhtml_legend=1 00:27:16.742 --rc geninfo_all_blocks=1 00:27:16.742 --rc geninfo_unexecuted_blocks=1 00:27:16.742 00:27:16.742 ' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:16.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.742 --rc genhtml_branch_coverage=1 00:27:16.742 --rc genhtml_function_coverage=1 00:27:16.742 --rc genhtml_legend=1 00:27:16.742 --rc geninfo_all_blocks=1 00:27:16.742 --rc geninfo_unexecuted_blocks=1 00:27:16.742 00:27:16.742 ' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:16.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.742 --rc genhtml_branch_coverage=1 00:27:16.742 --rc genhtml_function_coverage=1 00:27:16.742 --rc genhtml_legend=1 00:27:16.742 --rc geninfo_all_blocks=1 00:27:16.742 --rc geninfo_unexecuted_blocks=1 00:27:16.742 00:27:16.742 ' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:16.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.742 --rc genhtml_branch_coverage=1 00:27:16.742 --rc genhtml_function_coverage=1 00:27:16.742 --rc genhtml_legend=1 00:27:16.742 --rc geninfo_all_blocks=1 00:27:16.742 --rc geninfo_unexecuted_blocks=1 00:27:16.742 00:27:16.742 ' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.742 13:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.742 13:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.877 13:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:24.877 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:24.877 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.877 
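SPDK_TEST_NVMF_NICS=e810 steers this matching: the harness keeps per-family arrays of PCI vendor:device IDs (0x8086:0x159b is the E810 variant found here at 0000:31:00.0 and .1) and then reads each port's netdev name out of the device's sysfs net/ directory. The harness itself walks a prebuilt pci_bus_cache; the sketch below reaches the same answer with lspci and is only an illustration:

  shopt -s nullglob
  # -D: print the PCI domain, -n: numeric IDs, -d vendor:device: filter
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "E810 port $pci -> $(basename "$dev")"    # e.g. cvl_0_0, cvl_0_1
      done
  done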
13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:24.877 Found net devices under 0000:31:00.0: cvl_0_0 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:24.877 Found net devices under 0000:31:00.1: cvl_0_1 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.877 13:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.877 13:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:27:24.877 00:27:24.877 --- 10.0.0.2 ping statistics --- 00:27:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.877 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
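Everything here runs on one box: the two E810 ports are evidently cabled back to back, so moving one of them into a private network namespace yields a genuine TCP path, target side at 10.0.0.2 inside cvl_0_0_ns_spdk and initiator at 10.0.0.1 in the root namespace. The traced commands, collected (the second ping's reply continues in the log just below):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the comment tag is what nvmftestfini greps on to remove exactly this rule
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back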
00:27:24.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:27:24.877 00:27:24.877 --- 10.0.0.1 ping statistics --- 00:27:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.877 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1881075 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1881075 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:24.877 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1881075 ']' 00:27:24.878 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.878 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:24.878 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
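With the fabric up, nvmfappstart launches nvmf_tgt inside the target namespace with auth tracing enabled (-L nvme_auth) and blocks until the app's JSON-RPC socket answers. waitforlisten is the harness helper for that; the rpc.py poll below is only a stand-in for the same readiness check:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!                                             # 1881075 in this run
  # the Unix socket lives on the shared filesystem, so polling from the root ns works
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done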
00:27:24.878 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:24.878 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b9d7c0577e62ca5cf8d21347f637db9 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EFz 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b9d7c0577e62ca5cf8d21347f637db9 0 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b9d7c0577e62ca5cf8d21347f637db9 0 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b9d7c0577e62ca5cf8d21347f637db9 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:25.138 13:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EFz 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EFz 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EFz 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.399 13:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ce97ff47b8b5da196986346518dc6017fa017d266b0c38822243e90d5ad795f 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MGc 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ce97ff47b8b5da196986346518dc6017fa017d266b0c38822243e90d5ad795f 3 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ce97ff47b8b5da196986346518dc6017fa017d266b0c38822243e90d5ad795f 3 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ce97ff47b8b5da196986346518dc6017fa017d266b0c38822243e90d5ad795f 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MGc 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MGc 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MGc 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6c8021a2179cc767448606976417401a292ae9c6d6c59633 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.D4j 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6c8021a2179cc767448606976417401a292ae9c6d6c59633 0 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6c8021a2179cc767448606976417401a292ae9c6d6c59633 0 
00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6c8021a2179cc767448606976417401a292ae9c6d6c59633 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.D4j 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.D4j 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.D4j 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.399 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6fcf738b29f20b427fd7a6ae61d16866c236601b9ba32974 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sN2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6fcf738b29f20b427fd7a6ae61d16866c236601b9ba32974 2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6fcf738b29f20b427fd7a6ae61d16866c236601b9ba32974 2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6fcf738b29f20b427fd7a6ae61d16866c236601b9ba32974 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sN2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sN2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sN2 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.400 13:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1f5c7d4cafa2b8a8d05af2496ead992a 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jwO 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1f5c7d4cafa2b8a8d05af2496ead992a 1 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1f5c7d4cafa2b8a8d05af2496ead992a 1 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1f5c7d4cafa2b8a8d05af2496ead992a 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:25.400 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jwO 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jwO 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jwO 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4301b91e554b022984c0c6c33b12a68d 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.pTz 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4301b91e554b022984c0c6c33b12a68d 1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4301b91e554b022984c0c6c33b12a68d 1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=4301b91e554b022984c0c6c33b12a68d 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.pTz 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.pTz 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.pTz 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c65989b6c8107a35ede2b6e35c9d1c99f66cfa62e666f001 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.O4B 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c65989b6c8107a35ede2b6e35c9d1c99f66cfa62e666f001 2 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c65989b6c8107a35ede2b6e35c9d1c99f66cfa62e666f001 2 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c65989b6c8107a35ede2b6e35c9d1c99f66cfa62e666f001 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.O4B 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.O4B 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.O4B 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:25.660 13:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bad62b5fa37cc330e2bbc0c4c1dde9d3 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ifc 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bad62b5fa37cc330e2bbc0c4c1dde9d3 0 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bad62b5fa37cc330e2bbc0c4c1dde9d3 0 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bad62b5fa37cc330e2bbc0c4c1dde9d3 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ifc 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ifc 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ifc 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b4d257dd1a68fa1b7f21e09407209673fdc73ee40d99940d1ec8322fafc0692 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HUA 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b4d257dd1a68fa1b7f21e09407209673fdc73ee40d99940d1ec8322fafc0692 3 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b4d257dd1a68fa1b7f21e09407209673fdc73ee40d99940d1ec8322fafc0692 3 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b4d257dd1a68fa1b7f21e09407209673fdc73ee40d99940d1ec8322fafc0692 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:25.660 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HUA 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HUA 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.HUA 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1881075 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1881075 ']' 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EFz 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MGc ]] 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MGc 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.920 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.D4j 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sN2 ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.sN2 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jwO 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.pTz ]] 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pTz 00:27:26.181 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.O4B 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ifc ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ifc 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.HUA 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.182 13:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:26.182 13:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:29.480 Waiting for block devices as requested 00:27:29.739 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:29.739 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:29.739 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:29.999 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:29.999 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:29.999 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:29.999 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.259 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.259 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:30.518 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:30.518 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:30.518 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:30.778 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.778 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.778 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.778 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:31.037 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:31.976 No valid GPT data, bailing 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:31.976 13:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:31.976 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:31.977 00:27:31.977 Discovery Log Number of Records 2, Generation counter 2 00:27:31.977 =====Discovery Log Entry 0====== 00:27:31.977 trtype: tcp 00:27:31.977 adrfam: ipv4 00:27:31.977 subtype: current discovery subsystem 00:27:31.977 treq: not specified, sq flow control disable supported 00:27:31.977 portid: 1 00:27:31.977 trsvcid: 4420 00:27:31.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:31.977 traddr: 10.0.0.1 00:27:31.977 eflags: none 00:27:31.977 sectype: none 00:27:31.977 =====Discovery Log Entry 1====== 00:27:31.977 trtype: tcp 00:27:31.977 adrfam: ipv4 00:27:31.977 subtype: nvme subsystem 00:27:31.977 treq: not specified, sq flow control disable supported 00:27:31.977 portid: 1 00:27:31.977 trsvcid: 4420 00:27:31.977 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:31.977 traddr: 10.0.0.1 00:27:31.977 eflags: none 00:27:31.977 sectype: none 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.977 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 nvme0n1 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.237 13:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
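Each connect_authenticate iteration in the trace pairs a kernel-target configuration with an SPDK-initiator connection: the four echoes from nvmet_auth_set_key (host/auth.sh@48-51) select the HMAC, the DH group, and the host/controller secrets — presumably landing in the dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes of the host's configfs entry, since the target paths themselves are not echoed — after which rpc_cmd narrows the initiator to the same parameters and attaches. A condensed sketch of one such iteration, with the rpc.py location assumed and the long secrets elided:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed location
host="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"

# target side: the attribute names are assumptions based on the values echoed above
echo 'hmac(sha256)'       > "$host/dhchap_hash"
echo 'ffdhe2048'          > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NmM4...:' > "$host/dhchap_key"       # full value appears in the trace
echo 'DHHC-1:02:NmZj...:' > "$host/dhchap_ctrl_key"  # ditto

# initiator side: key1/ckey1 were registered earlier via keyring_file_add_key
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_detach_controller nvme0

The loops that follow repeat this pattern for every digest (sha256, sha384, sha512), every DH group (ffdhe2048 through ffdhe8192), and each keyid, which is why the same get_controllers/detach_controller/nvme0n1 sequence recurs below.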
00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.237 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.498 nvme0n1 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.498 13:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.498 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.499 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.499 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.499 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.759 nvme0n1 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.759 nvme0n1 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:32.759 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 nvme0n1 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.019 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.279 13:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.279 nvme0n1 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.279 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:33.280 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.566 nvme0n1 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:33.566 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.567 
13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.567 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.826 nvme0n1 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.827 13:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.827 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.087 nvme0n1 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.087 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.347 13:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.347 13:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 nvme0n1 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.347 13:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.347 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.607 nvme0n1 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.607 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.608 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.867 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.867 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.868 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.128 nvme0n1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:35.128 13:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.128 13:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 nvme0n1 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.387 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
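The host-side half of each iteration traced above is a fixed RPC sequence: bdev_nvme_set_options first pins the DH-HMAC-CHAP digest and DH group the initiator is allowed to negotiate, bdev_nvme_attach_controller then connects with the named keys, and the checks at host/auth.sh@64-65 confirm the controller came up as nvme0 before detaching it for the next combination. The sketch below replays one such iteration by hand through SPDK's scripts/rpc.py (the rpc_cmd wrapper in the trace ultimately drives the same script); it assumes a target is already listening on 10.0.0.1:4420 and that the key names key1/ckey1 were registered with SPDK's keyring earlier in the run, a step that falls outside this excerpt.

    # Minimal manual replay of one connect_authenticate pass (a sketch, not
    # the test's literal code). Key names are assumed to be pre-registered.
    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Mirror host/auth.sh@64-65: the controller must show up named nvme0,
    # after which it is detached to make room for the next combination.
    [[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    $rpc bdev_nvme_detach_controller nvme0

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 is what makes the controller key optional: bash's ${var:+word} operator leaves the array empty for key IDs that define no controller key, so the attach call silently drops the flag pair, as it does for keyid 4 above.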
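The get_main_ns_ip records that recur before every attach (nvmf/common.sh@769-783) implement transport-dependent address selection: an associative array maps each transport to the name of the environment variable holding the right address, and bash indirect expansion dereferences it. The function below is a plausible reconstruction from the trace alone, not a verbatim copy of nvmf/common.sh; the variable names match the log, and TEST_TRANSPORT is assumed to carry the transport string (tcp in this job).

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # rdma jobs resolve the target-side IP
            [tcp]=NVMF_INITIATOR_IP       # tcp jobs (this log) use the initiator IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: $NVMF_INITIATOR_IP
        echo "${!ip}"                          # prints 10.0.0.1 in this run
    }

That indirection is why the trace tests the literal string NVMF_INITIATOR_IP before the resolved 10.0.0.1 appears.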
00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.388 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.648 nvme0n1 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.648 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.908 nvme0n1 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.908 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.168 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.169 13:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.169 13:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.429 nvme0n1 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.429 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.000 nvme0n1 00:27:37.000 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.000 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 
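
The pass above provisions the target-side DH-HMAC-CHAP key (nvmet_auth_set_key) before the host attempts an authenticated connect; the get_main_ns_ip boilerplate that precedes each attach only resolves which address to dial, picking NVMF_INITIATOR_IP (10.0.0.1 here) for -t tcp. Stripped of the xtrace noise, one connect_authenticate pass reduces to the following sketch — assuming SPDK's rpc.py is on PATH (the suite's rpc_cmd wrapper resolves this internally) and that the named keys key1/ckey1 were registered earlier in the run:

    # One connect_authenticate pass; digest/dhgroup/keyid are the loop variables.
    digest=sha256 dhgroup=ffdhe6144 keyid=1

    # Restrict the host to the digest/dhgroup pair under test.
    rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with bidirectional authentication: --dhchap-key authenticates the
    # host to the controller, --dhchap-ctrlr-key the controller back to the host.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify the controller came up under the expected name, then tear it down.
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

When ckeys[keyid] is empty (keyid 4 in the passes above), the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key entirely, so that pass exercises unidirectional authentication only.
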
00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.001 13:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.260 nvme0n1 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.260 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.520 13:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:37.520 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.521 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.780 nvme0n1 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.780 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.781 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.040 13:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.299 nvme0n1 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.299 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.300 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.870 nvme0n1 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.870 13:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.442 nvme0n1 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.442 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:39.702 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.703 13:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.273 nvme0n1 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:40.273 
13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.273 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.274 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.212 nvme0n1 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.212 
13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.212 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.213 13:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.780 nvme0n1 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.780 13:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.348 nvme0n1 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.348 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:42.608 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.609 nvme0n1 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.609 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.869 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.870 nvme0n1 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:42.870 13:24:24 
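[annotation] The get_main_ns_ip expansions that precede every attach resolve which address the initiator should dial. Below is a minimal reconstruction of that helper from the nvmf/common.sh@769-783 markers in this trace; the real function may differ, and the return-on-failure behavior is an assumption:

```bash
# Sketch of get_main_ns_ip (nvmf/common.sh), reconstructed from the xtrace above.
# Assumed environment for this run: TEST_TRANSPORT=tcp, NVMF_INITIATOR_IP=10.0.0.1.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP     # TCP runs dial the initiator IP

    [[ -z $TEST_TRANSPORT ]] && return 1                   # transport must be set
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                   # name of the variable to read
    [[ -z ${!ip} ]] && return 1                            # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}
```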
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.870 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 nvme0n1 00:27:43.236 13:24:24 
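[annotation] Each connect_authenticate round reduces to two SPDK RPCs: pin the host to one digest/DH-group pair, then attach with the matching DH-HMAC-CHAP keys. Invoked directly through SPDK's rpc.py, the keyid=2 round traced above would look roughly like this (key2/ckey2 name keys registered earlier in the test, outside this excerpt):

```bash
# Restrict the initiator to the digest/group under test
# (flags copied verbatim from the rpc_cmd calls in this log).
scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe2048

# Attach with host key 2 and controller key 2 for bidirectional authentication.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
```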
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 13:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.236 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.496 nvme0n1 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.496 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.497 nvme0n1 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.497 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.758 nvme0n1 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.758 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.018 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.019 
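[annotation] After every attach the test checks that exactly one controller named nvme0 exists before detaching it; the bare nvme0n1 lines in the log are the bdev exposed by that controller (namespace 1) appearing. The host/auth.sh@64-65 steps amount to:

```bash
# Verify the authenticated attach produced controller nvme0, then tear it down.
# rpc_cmd is SPDK's test wrapper around scripts/rpc.py.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                    # fails the round if auth did not complete
rpc_cmd bdev_nvme_detach_controller nvme0
```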
13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.019 13:24:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.019 nvme0n1 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.019 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.278 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.279 13:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.279 nvme0n1 00:27:44.279 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.279 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.279 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.279 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.279 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.539 nvme0n1 00:27:44.539 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.799 
13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.799 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.060 nvme0n1 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.060 
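[annotation] On the target side, nvmet_auth_set_key reprograms the kernel nvmet host entry for each round; the echo 'hmac(sha384)', echo ffdhe3072, and echo DHHC-1:... lines above are those writes. The log does not show their destinations, so the configfs paths below are an assumption based on the standard Linux nvmet attributes:

```bash
# ASSUMPTION: the echoes land in the nvmet configfs host entry, which exposes
# dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key attributes.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"    # digest for this round
echo ffdhe3072 > "$host/dhchap_dhgroup"      # FFDHE group for this round
echo "DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD:" \
        > "$host/dhchap_key"                 # host secret (keyid 0 from this log)
# When ckey is non-empty it is written to dhchap_ctrl_key the same way;
# keyid 4 has ckey='' ([[ -z '' ]] above), so controller authentication is skipped.
```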
13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.060 13:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.321 nvme0n1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.321 13:24:27 
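[annotation] Everything in this excerpt is driven by three nested loops in host/auth.sh (the @100-103 markers): each digest is paired with each FFDHE group and each key id for one provision/connect/verify/detach round. A skeleton reconstructed from those markers, with only the slice visible here filled in:

```bash
# host/auth.sh@100-103 driver, reconstructed from the loop markers in this trace.
# keys[0..4] and ckeys[0..4] are populated with DHHC-1 secrets earlier in auth.sh
# (not shown in this excerpt). Per the NVMe-oF secret format, the two-digit field
# after "DHHC-1:" says how the secret is transformed: 00 plain, 01/02/03 SHA-256/384/512.
digests=(sha384)                          # the full matrix covers more digests
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)  # groups exercised in this excerpt
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target
            connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
        done
    done
done
```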
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.321 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.583 nvme0n1 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.583 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.843 nvme0n1 00:27:45.843 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.843 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.843 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.843 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.843 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.104 13:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.364 nvme0n1 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.364 13:24:28 
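
Only the arguments and echoed values of nvmet_auth_set_key surface in this trace: the digest as 'hmac(sha384)', the dhgroup, and the DHHC-1 secrets (with an empty controller key for keyid 4). A plausible reconstruction of the target-side helper against the Linux nvmet configfs interface is sketched below; the configfs path and attribute names are assumptions, as the function body is not shown in this log:

# Sketch only: push one key pair into the kernel nvmet target for the test
# host. keys[]/ckeys[] are the arrays populated elsewhere by the script.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"  # assumed path
    echo "hmac($digest)" > "$host/dhchap_hash"        # e.g. hmac(sha384)
    echo "$dhgroup" > "$host/dhchap_dhgroup"          # e.g. ffdhe4096
    echo "${keys[keyid]}" > "$host/dhchap_key"        # DHHC-1:..: host secret
    if [[ -n ${ckeys[keyid]:-} ]]; then               # keyid 4 has no ctrlr key
        echo "${ckeys[keyid]}" > "$host/dhchap_ctrlr_key"  # bidirectional auth
    fi
}
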
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.364 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.624 nvme0n1 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.624 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.625 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.193 nvme0n1 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.193 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.194 13:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.768 nvme0n1 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.768 13:24:29 
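
The nvmf/common.sh frames repeated before every attach are a small transport-to-address lookup. Condensed below, with the trace's separate -z guards folded together; the name of the transport variable is an assumption, since the trace only shows its expanded value, tcp:

# Map the transport to the environment variable naming the connect address,
# then resolve it via indirect expansion (10.0.0.1 in this run).
get_main_ns_ip() {
    local ip_name
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs dial the initiator IP
    )
    ip_name=${ip_candidates[${TEST_TRANSPORT:-tcp}]:-}  # tcp in this run
    [[ -n $ip_name ]] || return 1
    [[ -n ${!ip_name} ]] && echo "${!ip_name}"
}
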
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.768 13:24:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.768 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.028 nvme0n1 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.028 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.288 13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.288 
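
One bash detail worth noting in the auth.sh@58 frames: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) builds the controller-key arguments as an array that stays empty whenever ckeys[keyid] is unset or null, so the attach command can always expand "${ckey[@]}" without passing a dangling flag. A standalone illustration with placeholder values:

ckeys=([0]=sample-secret [4]=)   # placeholders; index 4 has no ctrlr key, as above
keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"               # prints 2: the flag and its value are passed
keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"               # prints 0: nothing is appended to the rpc call
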
13:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.548 nvme0n1 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.548 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.807 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.067 nvme0n1 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.067 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.327 13:24:30 
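
Between iterations the script proves that the authenticated attach actually produced a controller before tearing it down; the [[ nvme0 == \n\v\m\e\0 ]] frames are just the xtrace-escaped form of a plain string comparison. The check, as traced at auth.sh@64-65:

# Verify the controller exists, then detach so the next key pair starts clean.
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]                   # a non-zero status fails the test here
rpc_cmd bdev_nvme_detach_controller nvme0
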
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.327 13:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.897 nvme0n1 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.897 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.898 13:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.466 nvme0n1 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.466 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.761 
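
Stepping back, the auth.sh@101-@104 frames give the shape of this whole pass: one digest crossed with every dhgroup and every keyid, fifteen authenticated attaches across the three groups visible here. A sketch of the driver loop as reconstructed from those frames; the keys/ckeys arrays hold the DHHC-1 secrets echoed throughout the trace, and both helpers are the script's own functions named at @103/@104:

digest=sha384
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)        # groups exercised in this excerpt
for dhgroup in "${dhgroups[@]}"; do             # auth.sh@101
    for keyid in "${!keys[@]}"; do              # auth.sh@102, keyids 0-4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side attach+verify
    done
done
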
13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.761 13:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.387 nvme0n1 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.387 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.388 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.957 nvme0n1 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.957 13:24:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.957 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.958 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.217 13:24:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.217 13:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.799 nvme0n1 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.799 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.058 nvme0n1 00:27:53.058 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.058 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.058 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.059 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.320 nvme0n1 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.320 13:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:53.320 
13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.320 nvme0n1 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.320 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.581 
13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.581 nvme0n1 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.581 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.841 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.842 nvme0n1 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.842 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.102 nvme0n1 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.102 
13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.102 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.103 13:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.363 13:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.363 nvme0n1 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:54.363 13:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.363 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 nvme0n1 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.625 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.625 13:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.886 nvme0n1 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.886 
13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.886 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
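Each pass traced above is one connect_authenticate cycle: the target-side key is installed with nvmet_auth_set_key, the host is pinned to a single digest and DH group via bdev_nvme_set_options, and DH-HMAC-CHAP is proven end to end by attaching, listing, and detaching the nvme0 controller. A minimal standalone sketch of that cycle, assuming rpc_cmd is the autotest wrapper around SPDK's rpc.py and reusing the demo NQNs, address, and keyring names from this run:

  # One authentication round, as exercised by host/auth.sh (sketch only).
  digest=sha512 dhgroup=ffdhe3072 keyid=4

  # Pin the initiator to exactly one digest and DH group before connecting.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with the key under test; bidirectional rounds additionally pass
  # --dhchap-ctrlr-key "ckey$keyid" (keyid 4 above has no controller key).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

  # Authentication succeeded iff the controller materialized; then clean up
  # so the next digest/dhgroup/keyid combination starts from scratch.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding trace repeats exactly this shape for every dhgroup (ffdhe3072 through ffdhe8192) crossed with every keyid, which is why the nvme0n1 namespace and the get/detach checks recur after each attach.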
00:27:55.147 nvme0n1 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.147 13:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.147 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.147 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.147 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.147 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.408 13:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.408 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.669 nvme0n1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.669 13:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.669 13:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.669 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.929 nvme0n1 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.929 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.930 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.190 nvme0n1 00:27:56.190 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.190 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.190 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.190 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.190 13:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:56.190 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.191 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.452 nvme0n1 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.452 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.712 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.712 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.712 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.712 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.712 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.713 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.974 nvme0n1 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:56.974 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.975 13:24:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.975 13:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 nvme0n1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.546 13:24:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.546 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.807 nvme0n1 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.807 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.067 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.068 13:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.328 nvme0n1 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.328 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.588 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.848 nvme0n1 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.848 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.849 13:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.849 13:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.421 nvme0n1 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2I5ZDdjMDU3N2U2MmNhNWNmOGQyMTM0N2Y2MzdkYjl4eOOD: 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGNlOTdmZjQ3YjhiNWRhMTk2OTg2MzQ2NTE4ZGM2MDE3ZmEwMTdkMjY2YjBjMzg4MjIyNDNlOTBkNWFkNzk1ZmPn0Ms=: 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.421 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.992 nvme0n1 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.992 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.252 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.253 13:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.824 nvme0n1 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.824 13:24:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.824 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.825 13:24:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.825 13:24:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.767 nvme0n1 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzY1OTg5YjZjODEwN2EzNWVkZTJiNmUzNWM5ZDFjOTlmNjZjZmE2MmU2NjZmMDAxhdPtQA==: 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: ]] 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkNjJiNWZhMzdjYzMzMGUyYmJjMGM0YzFkZGU5ZDM7luou: 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.767 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.768 13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.768 
13:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.338 nvme0n1 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.338 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I0ZDI1N2RkMWE2OGZhMWI3ZjIxZTA5NDA3MjA5NjczZmRjNzNlZTQwZDk5OTQwZDFlYzgzMjJmYWZjMDY5MujOkNE=: 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.339 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.910 nvme0n1 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.910 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:03.170 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.171 request: 00:28:03.171 { 00:28:03.171 "name": "nvme0", 00:28:03.171 "trtype": "tcp", 00:28:03.171 "traddr": "10.0.0.1", 00:28:03.171 "adrfam": "ipv4", 00:28:03.171 "trsvcid": "4420", 00:28:03.171 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:03.171 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:03.171 "prchk_reftag": false, 00:28:03.171 "prchk_guard": false, 00:28:03.171 "hdgst": false, 00:28:03.171 "ddgst": false, 00:28:03.171 "allow_unrecognized_csi": false, 00:28:03.171 "method": "bdev_nvme_attach_controller", 00:28:03.171 "req_id": 1 00:28:03.171 } 00:28:03.171 Got JSON-RPC error response 00:28:03.171 response: 00:28:03.171 { 00:28:03.171 "code": -5, 00:28:03.171 "message": "Input/output error" 00:28:03.171 } 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
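The -5 (Input/output error) response above is the expected result, not a failure of the suite: the target now requires DH-HMAC-CHAP, so an attach that presents no key must be rejected, and the NOT wrapper inverts the exit status so the step passes. A minimal stand-alone sketch of this negative check, assuming SPDK's scripts/rpc.py and a target at 10.0.0.1:4420 provisioned as in the trace (command and flag names are taken from the log above; everything else is illustrative):

  # Attach without --dhchap-key: the authenticating target rejects the
  # connection and rpc.py exits non-zero with JSON-RPC error -5.
  # The leading '!' makes the shell step succeed only if the attach fails.
  ! ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

  # A rejected attach must not leave a stale controller behind.
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]

The same pattern is repeated immediately below with the wrong key (key2 where the target was keyed for keyid 1), which must fail identically.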
00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.171 13:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.171 request: 00:28:03.171 { 00:28:03.171 "name": "nvme0", 00:28:03.171 "trtype": "tcp", 00:28:03.171 "traddr": "10.0.0.1", 00:28:03.171 "adrfam": "ipv4", 00:28:03.171 "trsvcid": "4420", 00:28:03.171 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:03.171 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:03.171 "prchk_reftag": false, 00:28:03.171 "prchk_guard": false, 00:28:03.171 "hdgst": false, 00:28:03.171 "ddgst": false, 00:28:03.171 "dhchap_key": "key2", 00:28:03.171 "allow_unrecognized_csi": false, 00:28:03.171 "method": "bdev_nvme_attach_controller", 00:28:03.171 "req_id": 1 00:28:03.171 } 00:28:03.171 Got JSON-RPC error response 00:28:03.171 response: 00:28:03.171 { 00:28:03.171 "code": -5, 00:28:03.171 "message": "Input/output error" 00:28:03.171 } 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.171 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.432 request: 00:28:03.432 { 00:28:03.432 "name": "nvme0", 00:28:03.432 "trtype": "tcp", 00:28:03.432 "traddr": "10.0.0.1", 00:28:03.432 "adrfam": "ipv4", 00:28:03.432 "trsvcid": "4420", 00:28:03.432 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:03.432 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:03.432 "prchk_reftag": false, 00:28:03.432 "prchk_guard": false, 00:28:03.432 "hdgst": false, 00:28:03.432 "ddgst": false, 00:28:03.432 "dhchap_key": "key1", 00:28:03.432 "dhchap_ctrlr_key": "ckey2", 00:28:03.432 "allow_unrecognized_csi": false, 00:28:03.432 "method": "bdev_nvme_attach_controller", 00:28:03.432 "req_id": 1 00:28:03.432 } 00:28:03.432 Got JSON-RPC error response 00:28:03.432 response: 00:28:03.432 { 00:28:03.432 "code": -5, 00:28:03.432 "message": "Input/output 
error" 00:28:03.432 } 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.432 nvme0n1 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.432 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.692 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.692 request: 00:28:03.692 { 00:28:03.692 "name": "nvme0", 00:28:03.692 "dhchap_key": "key1", 00:28:03.692 "dhchap_ctrlr_key": "ckey2", 00:28:03.692 "method": "bdev_nvme_set_keys", 00:28:03.692 "req_id": 1 00:28:03.692 } 00:28:03.692 Got JSON-RPC error response 00:28:03.692 response: 00:28:03.692 { 00:28:03.693 "code": -13, 00:28:03.693 "message": "Permission denied" 00:28:03.693 } 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:03.693 13:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:05.074 13:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM4MDIxYTIxNzljYzc2NzQ0ODYwNjk3NjQxNzQwMWEyOTJhZTljNmQ2YzU5NjMzehNNkA==: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmZjZjczOGIyOWYyMGI0MjdmZDdhNmFlNjFkMTY4NjZjMjM2NjAxYjliYTMyOTc0EALHIg==: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.016 nvme0n1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWY1YzdkNGNhZmEyYjhhOGQwNWFmMjQ5NmVhZDk5MmFmCS3C: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: ]] 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDMwMWI5MWU1NTRiMDIyOTg0YzBjNmMzM2IxMmE2OGS6vvh0: 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:06.016 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.017 request: 00:28:06.017 { 00:28:06.017 "name": "nvme0", 00:28:06.017 "dhchap_key": "key2", 00:28:06.017 "dhchap_ctrlr_key": "ckey1", 00:28:06.017 "method": "bdev_nvme_set_keys", 00:28:06.017 "req_id": 1 00:28:06.017 } 00:28:06.017 Got JSON-RPC error response 00:28:06.017 response: 00:28:06.017 { 00:28:06.017 "code": -13, 00:28:06.017 "message": "Permission denied" 00:28:06.017 } 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.017 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.277 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:06.277 13:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:07.217 13:24:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.217 13:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.217 rmmod nvme_tcp 00:28:07.217 rmmod nvme_fabrics 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1881075 ']' 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1881075 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1881075 ']' 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1881075 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1881075 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1881075' 00:28:07.217 killing process with pid 1881075 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1881075 00:28:07.217 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1881075 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:07.477 13:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.386 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:09.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:09.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:09.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:09.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:12.945 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:12.945 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:12.945 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:13.206 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:13.777 13:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EFz /tmp/spdk.key-null.D4j /tmp/spdk.key-sha256.jwO /tmp/spdk.key-sha384.O4B /tmp/spdk.key-sha512.HUA /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:13.777 13:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:17.075 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:17.075 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:17.075 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:17.075 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:17.646 00:28:17.646 real 1m1.042s 00:28:17.646 user 0m54.807s 00:28:17.646 sys 0m16.124s 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.646 ************************************ 00:28:17.646 END TEST nvmf_auth_host 00:28:17.646 ************************************ 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.646 ************************************ 00:28:17.646 START TEST nvmf_digest 00:28:17.646 ************************************ 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.646 * Looking for test storage... 
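For readers following the nvmf_auth_host trace that just ended: its final check is a negative path. After attaching a controller with key1/ckey1, host/auth.sh@147 tries to rotate to key2 while reusing ckey1, and the target correctly refuses with JSON-RPC error -13 (Permission denied). A minimal sketch of that sequence, assuming a running SPDK target at 10.0.0.1:4420 and rpc.py on PATH; the NQNs and key names mirror the trace, and key1/key2/ckey1 are keyring entries registered earlier in the script (not shown here):

    # Attach using matching key material first (as host/auth.sh@142 does).
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attempt a rotation with a mismatched controller key; expect JSON-RPC code -13.
    if rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1; then
        echo "rotation unexpectedly succeeded" >&2
        exit 1
    fi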
00:28:17.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.646 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.907 --rc genhtml_branch_coverage=1 00:28:17.907 --rc genhtml_function_coverage=1 00:28:17.907 --rc genhtml_legend=1 00:28:17.907 --rc geninfo_all_blocks=1 00:28:17.907 --rc geninfo_unexecuted_blocks=1 00:28:17.907 00:28:17.907 ' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.907 --rc genhtml_branch_coverage=1 00:28:17.907 --rc genhtml_function_coverage=1 00:28:17.907 --rc genhtml_legend=1 00:28:17.907 --rc geninfo_all_blocks=1 00:28:17.907 --rc geninfo_unexecuted_blocks=1 00:28:17.907 00:28:17.907 ' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.907 --rc genhtml_branch_coverage=1 00:28:17.907 --rc genhtml_function_coverage=1 00:28:17.907 --rc genhtml_legend=1 00:28:17.907 --rc geninfo_all_blocks=1 00:28:17.907 --rc geninfo_unexecuted_blocks=1 00:28:17.907 00:28:17.907 ' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:17.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.907 --rc genhtml_branch_coverage=1 00:28:17.907 --rc genhtml_function_coverage=1 00:28:17.907 --rc genhtml_legend=1 00:28:17.907 --rc geninfo_all_blocks=1 00:28:17.907 --rc geninfo_unexecuted_blocks=1 00:28:17.907 00:28:17.907 ' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.907 
13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:17.907 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.908 13:24:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.908 13:24:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.044 
13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:26.044 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:26.044 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.044 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:26.045 Found net devices under 0000:31:00.0: cvl_0_0 
00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:26.045 Found net devices under 0000:31:00.1: cvl_0_1 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.045 13:25:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:28:26.045 00:28:26.045 --- 10.0.0.2 ping statistics --- 00:28:26.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.045 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:26.045 00:28:26.045 --- 10.0.0.1 ping statistics --- 00:28:26.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.045 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:26.045 ************************************ 00:28:26.045 START TEST nvmf_digest_clean 00:28:26.045 ************************************ 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1898595 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1898595 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1898595 ']' 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:26.045 13:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.045 [2024-11-06 13:25:07.306111] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:28:26.045 [2024-11-06 13:25:07.306172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.045 [2024-11-06 13:25:07.405053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.045 [2024-11-06 13:25:07.455304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.045 [2024-11-06 13:25:07.455352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.045 [2024-11-06 13:25:07.455361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.045 [2024-11-06 13:25:07.455368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.045 [2024-11-06 13:25:07.455375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
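The nvmf_tgt startup above runs inside the cvl_0_0_ns_spdk namespace that nvmftestinit created a moment earlier (nvmf/common.sh@271-287 in the trace). Pulled out of the log, the plumbing reduces to roughly the following, using this rig's interface names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    # every target invocation is then prefixed with: ip netns exec cvl_0_0_ns_spdk ...

Splitting target and initiator across namespaces on one node is presumably what lets this phy rig push real NVMe/TCP traffic between the two e810 ports (cvl_0_0 and cvl_0_1) rather than short-circuiting over loopback.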
00:28:26.045 [2024-11-06 13:25:07.456158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.305 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:26.306 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:26.306 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:26.306 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.306 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.566 null0 00:28:26.566 [2024-11-06 13:25:08.262652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.566 [2024-11-06 13:25:08.286960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1898938 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1898938 /var/tmp/bperf.sock 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1898938 ']' 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:26.566 13:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.566 [2024-11-06 13:25:08.345708] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:28:26.566 [2024-11-06 13:25:08.345776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898938 ] 00:28:26.566 [2024-11-06 13:25:08.438843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.827 [2024-11-06 13:25:08.489940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.398 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:27.398 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:27.398 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.398 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.398 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.659 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.659 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.920 nvme0n1 00:28:28.180 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:28.180 13:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.180 Running I/O for 2 seconds... 
00:28:30.061 18686.00 IOPS, 72.99 MiB/s [2024-11-06T12:25:11.963Z] 19345.00 IOPS, 75.57 MiB/s 00:28:30.061 Latency(us) 00:28:30.061 [2024-11-06T12:25:11.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:30.061 nvme0n1 : 2.01 19357.19 75.61 0.00 0.00 6605.01 2976.43 18240.85 00:28:30.061 [2024-11-06T12:25:11.963Z] =================================================================================================================== 00:28:30.061 [2024-11-06T12:25:11.963Z] Total : 19357.19 75.61 0.00 0.00 6605.01 2976.43 18240.85 00:28:30.061 { 00:28:30.061 "results": [ 00:28:30.061 { 00:28:30.061 "job": "nvme0n1", 00:28:30.061 "core_mask": "0x2", 00:28:30.061 "workload": "randread", 00:28:30.061 "status": "finished", 00:28:30.061 "queue_depth": 128, 00:28:30.061 "io_size": 4096, 00:28:30.061 "runtime": 2.005353, 00:28:30.061 "iops": 19357.190479681132, 00:28:30.061 "mibps": 75.61402531125442, 00:28:30.061 "io_failed": 0, 00:28:30.061 "io_timeout": 0, 00:28:30.061 "avg_latency_us": 6605.00857402923, 00:28:30.061 "min_latency_us": 2976.4266666666667, 00:28:30.061 "max_latency_us": 18240.853333333333 00:28:30.061 } 00:28:30.061 ], 00:28:30.061 "core_count": 1 00:28:30.061 } 00:28:30.061 13:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:30.061 13:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:30.061 13:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:30.061 13:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:30.061 | select(.opcode=="crc32c") 00:28:30.061 | "\(.module_name) \(.executed)"' 00:28:30.061 13:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1898938 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1898938 ']' 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1898938 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1898938 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1898938' 00:28:30.321 killing process with pid 1898938 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1898938 00:28:30.321 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.321 00:28:30.321 Latency(us) 00:28:30.321 [2024-11-06T12:25:12.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.321 [2024-11-06T12:25:12.223Z] =================================================================================================================== 00:28:30.321 [2024-11-06T12:25:12.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.321 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1898938 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1899630 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1899630 /var/tmp/bperf.sock 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1899630 ']' 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.581 13:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.581 [2024-11-06 13:25:12.347481] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:28:30.581 [2024-11-06 13:25:12.347536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899630 ] 00:28:30.581 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:30.581 Zero copy mechanism will not be used. 00:28:30.581 [2024-11-06 13:25:12.429801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.581 [2024-11-06 13:25:12.459139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.521 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.781 nvme0n1 00:28:31.781 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:31.781 13:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.041 Zero copy mechanism will not be used. 00:28:32.041 Running I/O for 2 seconds... 
00:28:33.922 4631.00 IOPS, 578.88 MiB/s [2024-11-06T12:25:15.824Z] 4548.50 IOPS, 568.56 MiB/s 00:28:33.922 Latency(us) 00:28:33.922 [2024-11-06T12:25:15.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.922 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:33.922 nvme0n1 : 2.00 4553.21 569.15 0.00 0.00 3511.89 716.80 7809.71 00:28:33.922 [2024-11-06T12:25:15.824Z] =================================================================================================================== 00:28:33.922 [2024-11-06T12:25:15.824Z] Total : 4553.21 569.15 0.00 0.00 3511.89 716.80 7809.71 00:28:33.922 { 00:28:33.922 "results": [ 00:28:33.922 { 00:28:33.922 "job": "nvme0n1", 00:28:33.922 "core_mask": "0x2", 00:28:33.922 "workload": "randread", 00:28:33.922 "status": "finished", 00:28:33.922 "queue_depth": 16, 00:28:33.922 "io_size": 131072, 00:28:33.922 "runtime": 2.001445, 00:28:33.922 "iops": 4553.2103055542375, 00:28:33.922 "mibps": 569.1512881942797, 00:28:33.922 "io_failed": 0, 00:28:33.922 "io_timeout": 0, 00:28:33.922 "avg_latency_us": 3511.8871999707376, 00:28:33.922 "min_latency_us": 716.8, 00:28:33.922 "max_latency_us": 7809.706666666667 00:28:33.922 } 00:28:33.922 ], 00:28:33.922 "core_count": 1 00:28:33.922 } 00:28:33.922 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:33.922 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:33.922 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:33.922 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:33.922 | select(.opcode=="crc32c") 00:28:33.922 | "\(.module_name) \(.executed)"' 00:28:33.922 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1899630 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1899630 ']' 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1899630 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:34.182 13:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1899630 00:28:34.182 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:34.182 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:28:34.182 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1899630' 00:28:34.182 killing process with pid 1899630 00:28:34.182 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1899630 00:28:34.182 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.182 00:28:34.182 Latency(us) 00:28:34.182 [2024-11-06T12:25:16.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.182 [2024-11-06T12:25:16.084Z] =================================================================================================================== 00:28:34.182 [2024-11-06T12:25:16.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.182 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1899630 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1900316 00:28:34.442 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1900316 /var/tmp/bperf.sock 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1900316 ']' 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:34.443 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.443 [2024-11-06 13:25:16.158657] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:28:34.443 [2024-11-06 13:25:16.158730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900316 ] 00:28:34.443 [2024-11-06 13:25:16.245446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.443 [2024-11-06 13:25:16.274552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.382 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:35.382 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:35.382 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.382 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.382 13:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:35.382 13:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.383 13:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:35.643 nvme0n1 00:28:35.643 13:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:35.643 13:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:35.643 Running I/O for 2 seconds... 
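[Editorial sketch] Every run is validated the same way once perform_tests returns, as traced after the first result table above: pull accel statistics over the bperf socket and confirm that crc32c work was actually executed, and by the expected module (software here, since scan_dsa=false). A condensed sketch of that check, reusing $SPDK from the sketch above:

  read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))          # digests were really computed during the run
  [[ $acc_module == software ]]   # and by the expected (software) module

The MiB/s column in these tables is derived directly from IOPS: MiB/s = IOPS x io_size / 2^20, e.g. 4553.21 x 131072 / 1048576 = 569.15 for the 128 KiB randread run above, and 29373.01 x 4096 / 1048576 = 114.74 for the 4 KiB randwrite results that follow.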
00:28:37.970 29223.00 IOPS, 114.15 MiB/s [2024-11-06T12:25:19.872Z] 29371.50 IOPS, 114.73 MiB/s 00:28:37.970 Latency(us) 00:28:37.970 [2024-11-06T12:25:19.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.970 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:37.970 nvme0n1 : 2.01 29373.01 114.74 0.00 0.00 4350.56 3181.23 13762.56 00:28:37.970 [2024-11-06T12:25:19.872Z] =================================================================================================================== 00:28:37.970 [2024-11-06T12:25:19.872Z] Total : 29373.01 114.74 0.00 0.00 4350.56 3181.23 13762.56 00:28:37.970 { 00:28:37.970 "results": [ 00:28:37.970 { 00:28:37.970 "job": "nvme0n1", 00:28:37.970 "core_mask": "0x2", 00:28:37.970 "workload": "randwrite", 00:28:37.970 "status": "finished", 00:28:37.970 "queue_depth": 128, 00:28:37.970 "io_size": 4096, 00:28:37.970 "runtime": 2.005617, 00:28:37.970 "iops": 29373.005912893637, 00:28:37.970 "mibps": 114.73830434724077, 00:28:37.970 "io_failed": 0, 00:28:37.970 "io_timeout": 0, 00:28:37.970 "avg_latency_us": 4350.5565280960545, 00:28:37.970 "min_latency_us": 3181.2266666666665, 00:28:37.970 "max_latency_us": 13762.56 00:28:37.970 } 00:28:37.970 ], 00:28:37.970 "core_count": 1 00:28:37.970 } 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:37.970 | select(.opcode=="crc32c") 00:28:37.970 | "\(.module_name) \(.executed)"' 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1900316 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1900316 ']' 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1900316 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1900316 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1900316' 00:28:37.970 killing process with pid 1900316 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1900316 00:28:37.970 Received shutdown signal, test time was about 2.000000 seconds 00:28:37.970 00:28:37.970 Latency(us) 00:28:37.970 [2024-11-06T12:25:19.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.970 [2024-11-06T12:25:19.872Z] =================================================================================================================== 00:28:37.970 [2024-11-06T12:25:19.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.970 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1900316 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1901002 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1901002 /var/tmp/bperf.sock 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1901002 ']' 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:38.230 13:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.230 [2024-11-06 13:25:19.951130] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:28:38.231 [2024-11-06 13:25:19.951186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901002 ] 00:28:38.231 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.231 Zero copy mechanism will not be used. 00:28:38.231 [2024-11-06 13:25:20.036844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.231 [2024-11-06 13:25:20.068418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.234 13:25:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.494 nvme0n1 00:28:39.494 13:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:39.494 13:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:39.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:39.753 Zero copy mechanism will not be used. 00:28:39.753 Running I/O for 2 seconds... 
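[Editorial note] Every run in this test attaches the controller the same way, as traced twice just above: NVMe/TCP to 10.0.0.2:4420 with --ddgst, which enables the per-PDU data digest (a crc32c over every data payload) that the accel framework then has to compute and verify; that is what makes these runs exercise the digest path at all. The attach itself, verbatim from the trace:

  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

The controller surfaces as bdev nvme0n1, which bdevperf.py perform_tests then drives for the 2-second window.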
00:28:41.632 7032.00 IOPS, 879.00 MiB/s [2024-11-06T12:25:23.534Z] 7675.00 IOPS, 959.38 MiB/s 00:28:41.632 Latency(us) 00:28:41.632 [2024-11-06T12:25:23.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.632 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:41.632 nvme0n1 : 2.00 7668.17 958.52 0.00 0.00 2083.01 1133.23 13489.49 00:28:41.632 [2024-11-06T12:25:23.534Z] =================================================================================================================== 00:28:41.632 [2024-11-06T12:25:23.534Z] Total : 7668.17 958.52 0.00 0.00 2083.01 1133.23 13489.49 00:28:41.632 { 00:28:41.632 "results": [ 00:28:41.632 { 00:28:41.632 "job": "nvme0n1", 00:28:41.632 "core_mask": "0x2", 00:28:41.632 "workload": "randwrite", 00:28:41.632 "status": "finished", 00:28:41.632 "queue_depth": 16, 00:28:41.632 "io_size": 131072, 00:28:41.632 "runtime": 2.004258, 00:28:41.632 "iops": 7668.174456581937, 00:28:41.632 "mibps": 958.5218070727421, 00:28:41.632 "io_failed": 0, 00:28:41.632 "io_timeout": 0, 00:28:41.632 "avg_latency_us": 2083.009042444748, 00:28:41.632 "min_latency_us": 1133.2266666666667, 00:28:41.632 "max_latency_us": 13489.493333333334 00:28:41.632 } 00:28:41.632 ], 00:28:41.632 "core_count": 1 00:28:41.632 } 00:28:41.632 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:41.632 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:41.632 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:41.632 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:41.632 | select(.opcode=="crc32c") 00:28:41.632 | "\(.module_name) \(.executed)"' 00:28:41.632 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:41.892 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:41.892 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:41.892 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:41.892 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1901002 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1901002 ']' 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1901002 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1901002 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1901002' 00:28:41.893 killing process with pid 1901002 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1901002 00:28:41.893 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.893 00:28:41.893 Latency(us) 00:28:41.893 [2024-11-06T12:25:23.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.893 [2024-11-06T12:25:23.795Z] =================================================================================================================== 00:28:41.893 [2024-11-06T12:25:23.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1901002 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1898595 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1898595 ']' 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1898595 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:41.893 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1898595 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1898595' 00:28:42.153 killing process with pid 1898595 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1898595 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1898595 00:28:42.153 00:28:42.153 real 0m16.713s 00:28:42.153 user 0m32.875s 00:28:42.153 sys 0m3.887s 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:42.153 ************************************ 00:28:42.153 END TEST nvmf_digest_clean 00:28:42.153 ************************************ 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:42.153 13:25:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:42.153 ************************************ 00:28:42.153 START TEST nvmf_digest_error 00:28:42.153 ************************************ 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1901917 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1901917 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1901917 ']' 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.153 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.413 [2024-11-06 13:25:24.096355] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:28:42.413 [2024-11-06 13:25:24.096413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.413 [2024-11-06 13:25:24.191559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.413 [2024-11-06 13:25:24.223554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.413 [2024-11-06 13:25:24.223582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.413 [2024-11-06 13:25:24.223588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.413 [2024-11-06 13:25:24.223592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.413 [2024-11-06 13:25:24.223597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
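[Editorial sketch] nvmf_digest_error starts its own target first: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc, so that the crc32c opcode can be reassigned to the error-injection module before the accel framework initializes. The launch traced above, in sketch form (waitforlisten simplified as before; the target's default RPC socket is used):

  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # same simplified waitforlisten, on the target's default socket this time
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The -e 0xFFFF tracepoint mask is what produces the "spdk_trace -s nvmf -i 0" notices in the startup banner above.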
00:28:42.413 [2024-11-06 13:25:24.224088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:42.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.242 [2024-11-06 13:25:24.926022] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.242 13:25:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.242 null0 00:28:43.242 [2024-11-06 13:25:25.004079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.242 [2024-11-06 13:25:25.028277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.242 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.242 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:43.242 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1902069 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1902069 /var/tmp/bperf.sock 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1902069 ']' 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
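[Editorial sketch] The target-side preparation traced above is the core of the error test: before framework init completes, crc32c is assigned to the "error" accel module ("Operation crc32c will be assigned to module error"), then a null bdev and a TCP listener are set up for the host to attach to. The notices correspond to roughly the RPC sequence below; the RPC names are standard SPDK ones, and any argument not visible in the trace (everything except null0, 10.0.0.2:4420, and cnode1) is an assumption:

  rpc_py() { "$SPDK/scripts/rpc.py" "$@"; }
  rpc_py accel_assign_opc -o crc32c -m error    # route crc32c through the error module
  rpc_py framework_start_init
  rpc_py bdev_null_create null0 100 4096        # backing namespace; size/block size assumed
  rpc_py nvmf_create_transport -t tcp
  rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420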
00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:43.243 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:43.243 [2024-11-06 13:25:25.082822] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:28:43.243 [2024-11-06 13:25:25.082869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902069 ] 00:28:43.502 [2024-11-06 13:25:25.166559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.502 [2024-11-06 13:25:25.196556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.073 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:44.073 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:44.073 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.073 13:25:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.334 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.595 nvme0n1 00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
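[Editorial sketch] With the target routing crc32c through the error module, the host and target are then armed for the actual fault, as traced above: the host's bdev layer is told to count NVMe errors and retry indefinitely, any stale injection is cleared, the controller is attached with --ddgst, and corrupt-type injection is enabled for crc32c. Condensed from the trace (rpc_cmd entries go to the target's default socket, bperf_rpc entries to the bperf socket):

  # host side (bperf socket): count NVMe errors, retry forever instead of failing I/O
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # target side (default socket): make sure no stale injection is active
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c results from here on (-t corrupt -i 256, as traced)
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then shows up in the run below as a host-side "data digest error" and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the -1 retry count turns into retries rather than I/O failures.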
00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:44.595 13:25:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.856 Running I/O for 2 seconds... 00:28:44.856 [2024-11-06 13:25:26.521343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.521374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.521384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.533002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.533023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.533031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.542344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.542363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.542370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.551262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.551281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.551288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.561314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.561332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.561338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.569641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.569658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.569665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.578959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.578977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.578984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.856 [2024-11-06 13:25:26.587937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.856 [2024-11-06 13:25:26.587954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.856 [2024-11-06 13:25:26.587961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.597062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.597087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.606675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.606692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.606699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.615230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.615247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.615254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.623376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.623392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.623403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.632184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.632202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.632208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.641350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.641368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.641374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.650231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.650248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.650254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.658724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.658742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.658752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.667937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.667954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.667960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.677528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.677545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.677552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.686128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.686145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.686152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.693915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.693933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.693939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.702563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.702584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.702590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.712266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.712283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.712290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.721907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.721924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.721931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.730806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.730822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.741572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.741589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.741596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.857 [2024-11-06 13:25:26.752664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:44.857 [2024-11-06 13:25:26.752682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.857 [2024-11-06 13:25:26.752688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.765335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.765352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:45.118 [2024-11-06 13:25:26.765359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.773440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.773456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.118 [2024-11-06 13:25:26.773463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.785155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.785172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.118 [2024-11-06 13:25:26.785179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.796523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.118 [2024-11-06 13:25:26.796546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.806464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.118 [2024-11-06 13:25:26.806492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.118 [2024-11-06 13:25:26.814950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.118 [2024-11-06 13:25:26.814968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.119 [2024-11-06 13:25:26.814978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.119 [2024-11-06 13:25:26.823564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.119 [2024-11-06 13:25:26.823582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.119 [2024-11-06 13:25:26.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.119 [2024-11-06 13:25:26.832230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:45.119 [2024-11-06 13:25:26.832248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:5156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.119 [2024-11-06 13:25:26.832255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:45.119 [2024-11-06 13:25:26.840867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0)
00:28:45.119 [2024-11-06 13:25:26.840884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:45.119 [2024-11-06 13:25:26.840890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry sequence -- a data digest error from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0x137e1c0), the failing READ printed by nvme_qpair.c:243 (qid:1, nsid:1, len:1; cid and lba vary per command), and its completion printed by nvme_qpair.c:474 as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0001 p:0 m:0 dnr:0 -- repeats continuously from 13:25:26.852 through 13:25:27.504 ...]
00:28:45.645 27813.00 IOPS, 108.64 MiB/s [2024-11-06T12:25:27.547Z]
[... the identical error/command/completion sequence continues on tqpair=(0x137e1c0) from 13:25:27.514 through 13:25:28.129 ...]
00:28:46.431 [2024-11-06 13:25:28.139845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0)
00:28:46.431 [2024-11-06 13:25:28.139861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.431 [2024-11-06 13:25:28.139868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.147563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.147580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.147587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.157580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.157597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.157604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.165281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.165297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.174903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.174921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.174927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.184128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.184146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.184153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.192520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.192537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.192547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.200921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.200939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.200945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.209924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.209941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.209948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.219485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.431 [2024-11-06 13:25:28.219504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.431 [2024-11-06 13:25:28.219511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.431 [2024-11-06 13:25:28.226852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.226869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.226876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.236943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.236961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.236968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.245230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.245248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.245254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.254794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.254811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.262977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.262994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.432 [2024-11-06 13:25:28.263001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.271728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.271754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.271761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.281321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.281337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.281344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.288801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.288818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.288825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.300036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.300054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.300060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.308775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.308793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.308800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.317640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.317657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.317664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.432 [2024-11-06 13:25:28.325752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.432 [2024-11-06 13:25:28.325769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23390 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.432 [2024-11-06 13:25:28.325775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.335448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.335466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.335472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.347252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.347269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.347276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.356812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.356829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.356836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.364263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.364279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.364286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.373900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.373918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.373924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.383017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.383034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.393158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.393175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.393182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.401132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.401149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.401155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.412468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.412485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.412491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.424445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.424463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.424470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.436740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.436761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.436770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.446570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.446588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.455243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.455261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.455267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.463893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 
13:25:28.463910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.463917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.472790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.472808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.472814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.481678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.481696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.481702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.490078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.490096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.490103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.499585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.499602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.499609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 [2024-11-06 13:25:28.509909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x137e1c0) 00:28:46.693 [2024-11-06 13:25:28.509929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.693 [2024-11-06 13:25:28.509936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.693 27794.50 IOPS, 108.57 MiB/s 00:28:46.693 Latency(us) 00:28:46.693 [2024-11-06T12:25:28.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.693 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:46.693 nvme0n1 : 2.00 27810.57 108.64 0.00 0.00 4597.77 2088.96 14527.15 00:28:46.693 [2024-11-06T12:25:28.595Z] =================================================================================================================== 00:28:46.693 [2024-11-06T12:25:28.595Z] Total : 27810.57 108.64 0.00 0.00 4597.77 2088.96 14527.15 00:28:46.693 { 00:28:46.693 "results": [ 
00:28:46.693 {
00:28:46.693 "job": "nvme0n1",
00:28:46.693 "core_mask": "0x2",
00:28:46.693 "workload": "randread",
00:28:46.693 "status": "finished",
00:28:46.693 "queue_depth": 128,
00:28:46.693 "io_size": 4096,
00:28:46.693 "runtime": 2.004274,
00:28:46.693 "iops": 27810.568814443533,
00:28:46.693 "mibps": 108.63503443142005,
00:28:46.693 "io_failed": 0,
00:28:46.693 "io_timeout": 0,
00:28:46.693 "avg_latency_us": 4597.766246142806,
00:28:46.693 "min_latency_us": 2088.96,
00:28:46.693 "max_latency_us": 14527.146666666667
00:28:46.693 }
00:28:46.693 ],
00:28:46.693 "core_count": 1
00:28:46.693 }
00:28:46.693 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:46.693 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:46.693 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:46.693 | .driver_specific
00:28:46.693 | .nvme_error
00:28:46.693 | .status_code
00:28:46.693 | .command_transient_transport_error'
00:28:46.693 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1902069
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1902069 ']'
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1902069
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1902069
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1902069'
00:28:46.954 killing process with pid 1902069
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1902069
00:28:46.954 Received shutdown signal, test time was about 2.000000 seconds
00:28:46.954
00:28:46.954 Latency(us)
00:28:46.954 [2024-11-06T12:25:28.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.954 [2024-11-06T12:25:28.856Z] ===================================================================================================================
00:28:46.954 [2024-11-06T12:25:28.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:46.954 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1902069
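For reference, the pass/fail check traced just above can be replayed by hand against the bdevperf RPC socket. A minimal sketch, assuming bdevperf is still listening on /var/tmp/bperf.sock, the commands are run from the spdk checkout, and the controller was attached with --nvme-error-stat as in this run; the errcount variable name is illustrative only:

  # Ask the bdev layer for per-NVMe-status error counters on nvme0n1 and pull
  # out the one this test asserts on: COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  # The digest_error test passes when at least one such completion was counted
  # (218 in the run above).
  (( errcount > 0 ))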
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1902759
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1902759 /var/tmp/bperf.sock
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1902759 ']'
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:47.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:47.214 13:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.214 [2024-11-06 13:25:28.930257] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:28:47.214 [2024-11-06 13:25:28.930312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902759 ]
00:28:47.214 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.214 Zero copy mechanism will not be used.
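The bdevperf invocation traced at digest.sh@57 is the whole worker for this case. A condensed sketch of the launch pattern, run from the spdk checkout; the -z flag parks the application idle until perform_tests arrives over the -r socket, which is what gives the test time to attach the controller and arm the error injection first:

  # 131072-byte random reads at queue depth 16 for 2 seconds on core 1 (mask 0x2);
  # no bdev exists yet, so -z makes bdevperf wait for RPC configuration.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!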
00:28:47.214 [2024-11-06 13:25:29.017149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.214 [2024-11-06 13:25:29.046270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:48.153 13:25:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:48.413 nvme0n1
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:48.674 13:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:48.674 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:48.674 Zero copy mechanism will not be used.
00:28:48.674 Running I/O for 2 seconds...
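Everything the run needs is configured over the RPC socket before perform_tests is sent, as the trace above shows. A minimal sketch of the same sequence, with the workspace path shortened to $SPDK; the RPC alias is an assumption for readability, and -i 32 is read here as the injection interval this test uses:

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Keep per-NVMe-status error counters; --bdev-retry-count -1 retries failed
  # I/O indefinitely in the bdev layer, so injected digest errors do not fail
  # the workload outright.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP target with data digest enabled (--ddgst): every received
  # data PDU payload is CRC32C-verified; the attach creates bdev nvme0n1.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the accel error module to corrupt crc32c results, so digest
  # verification fails on a fraction of the reads.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the 2-second run that bdevperf -z has been waiting for.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests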
00:28:48.674 [2024-11-06 13:25:30.431392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:48.674 [2024-11-06 13:25:30.431426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:48.674 [2024-11-06 13:25:30.431436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats on tqpair=(0x18e9a60) for the rest of this run, now with len:32 (131072-byte) READs, differing only in timestamp, cid, lba and sqhd ...]
00:28:49.199 [2024-11-06 13:25:30.930971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:49.199 [2024-11-06 13:25:30.930990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:49.199 [2024-11-06 13:25:30.930996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:49.199 [2024-11-06 13:25:30.941161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:49.199
[2024-11-06 13:25:30.941179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.941186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.948401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.948420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.948426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.960437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.960455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.960464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.968488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.968506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.968514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.974485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.974503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.979994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.980012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.980018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.985458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.985476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.985482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:30.996672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:30.996690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:30.996696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.005026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.005044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.005050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.016689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.016707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.016713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.024635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.024652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.024659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.036227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.036249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.036255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.045788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.045806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.045813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.057020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.057038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.057045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.065126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.065144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.065151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.071733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.071766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.079621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.079639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.079646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.084416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.084434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.084441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.199 [2024-11-06 13:25:31.086848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.199 [2024-11-06 13:25:31.086865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.199 [2024-11-06 13:25:31.086871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.460 [2024-11-06 13:25:31.098370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.460 [2024-11-06 13:25:31.098388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.460 [2024-11-06 13:25:31.098395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.460 [2024-11-06 13:25:31.108881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.460 [2024-11-06 13:25:31.108899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.460 [2024-11-06 13:25:31.108906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
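Note: the repeating three-line pattern above is SPDK's NVMe/TCP host reporting a data digest (DDGST) mismatch: nvme_tcp_accel_seq_recv_compute_crc32_done recomputes CRC32C over the received C2H payload, finds it differs from the digest carried in the PDU, prints the failed READ, and completes it with a retryable status. Below is a minimal sketch of that digest check — a bitwise CRC32C plus a hypothetical ddgst_ok() helper; neither is SPDK's actual implementation, which uses accelerated CRC paths.

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
     * the digest algorithm NVMe/TCP uses for HDGST/DDGST. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical check mirroring what each *ERROR* line records: the
     * digest received in the C2H data PDU is compared against the CRC32C
     * computed over the payload; a mismatch fails the request. */
    static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
    {
        return crc32c(payload, len) == recv_ddgst;
    }

    int main(void)
    {
        const uint8_t payload[4] = { 0xde, 0xad, 0xbe, 0xef };
        uint32_t bad_ddgst = 0;  /* deliberately wrong, to provoke the mismatch path */
        return ddgst_ok(payload, sizeof(payload), bad_ddgst) ? 0 : 1;
    }

Every READ in this phase trips the check, which is consistent with the test deliberately injecting digest errors rather than with random corruption.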
00:28:49.460 [2024-11-06 13:25:31.119000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.460 [2024-11-06 13:25:31.119018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.460 [2024-11-06 13:25:31.119024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.460 [2024-11-06 13:25:31.129341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.460 [2024-11-06 13:25:31.129359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.460 [2024-11-06 13:25:31.129366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.460 [2024-11-06 13:25:31.139367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.460 [2024-11-06 13:25:31.139386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.460 [2024-11-06 13:25:31.139392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.150737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.150760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.150766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.160222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.160240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.160247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.170388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.170406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.170412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.175175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.175193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.175199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.184822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.184841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.184850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.195364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.195383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.195390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.206454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.206473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.206479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.217598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.217617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.217624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.228489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.228515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.239488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.239506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.239513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.250504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.250523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.250530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.260221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.260239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.260246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.267218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.267236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.267243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.277193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.277215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.277221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.286841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.286859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.286865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.297340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.297359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.297365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.308598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.308616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.308623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.320507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.320524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.320531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.332413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.332431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.332437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.344826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.344844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.344850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.461 [2024-11-06 13:25:31.356749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.461 [2024-11-06 13:25:31.356767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.461 [2024-11-06 13:25:31.356773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.365743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.365765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.365772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.372767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.372785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.372792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.379704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.379722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.389929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.389948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.723 [2024-11-06 13:25:31.389954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.399903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.399921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.399927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.409976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.409994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.410000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.420482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.420500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.420506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.723 3112.00 IOPS, 389.00 MiB/s [2024-11-06T12:25:31.625Z] [2024-11-06 13:25:31.432209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.432228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.432234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.444793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.444811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.444818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.457311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.457329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.457339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.469478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.469497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.469503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.474825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.474843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.474849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.483198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.483216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.483223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.493565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.493582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.493589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.502423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.502442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.513759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.513777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.513783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.522974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.522992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.522999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.531453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 
[2024-11-06 13:25:31.531471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.531477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.541564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.541582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.541588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.552005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.552023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.552029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.560780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.560798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.560805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.573298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.573316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.573323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.723 [2024-11-06 13:25:31.585028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.723 [2024-11-06 13:25:31.585047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.723 [2024-11-06 13:25:31.585053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.724 [2024-11-06 13:25:31.593977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.724 [2024-11-06 13:25:31.593995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.724 [2024-11-06 13:25:31.594001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.724 [2024-11-06 13:25:31.604728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18e9a60) 00:28:49.724 [2024-11-06 13:25:31.604750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.724 [2024-11-06 13:25:31.604757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.724 [2024-11-06 13:25:31.615627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.724 [2024-11-06 13:25:31.615645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.724 [2024-11-06 13:25:31.615652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.622830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.622848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.622858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.633690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.633713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.638889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.638906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.638912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.650215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.650233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.650239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.659590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.659614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.670960] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.670976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.670983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.679208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.679226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.679232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.685907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.685924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.685931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.695763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.695780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.695787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.706316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.706337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.706343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.718047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.718064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.718071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.729190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.729207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.729214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
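Note: the "(00/22)" echoed by spdk_nvme_print_completion is status code type 0x0 (generic command status) / status code 0x22, which the NVMe base specification defines as Command Transient Transport Error — a retryable transport-level failure rather than a media error, which is why dnr (do not retry) stays 0 in every completion above. A small illustrative decoder for that field pair (the struct and function names here are not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    /* Status fields as printed in each completion line: "(SCT/SC)". */
    struct cpl_status { uint8_t sct; uint8_t sc; };

    static const char *decode(struct cpl_status s)
    {
        /* SCT 0x0 = generic command status; within it, SC 0x22 is
         * Command Transient Transport Error per the NVMe base spec. */
        if (s.sct == 0x0 && s.sc == 0x22)
            return "COMMAND TRANSIENT TRANSPORT ERROR";
        return "other";
    }

    int main(void)
    {
        struct cpl_status s = { 0x0, 0x22 };  /* values taken from the log */
        printf("(%02x/%02x) -> %s\n", s.sct, s.sc, decode(s));
        return 0;
    }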
00:28:49.985 [2024-11-06 13:25:31.738780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.738796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.738803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.748911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.748928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.760502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.760519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.760525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.772678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.772695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.772702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.784473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.784491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.985 [2024-11-06 13:25:31.784497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.985 [2024-11-06 13:25:31.797928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.985 [2024-11-06 13:25:31.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.797952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.809658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.809676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.809682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.821442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.821466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.833446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.833463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.833469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.847095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.847112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.847118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.858968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.858985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.858991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.871576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.871593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.871600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.986 [2024-11-06 13:25:31.883695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:49.986 [2024-11-06 13:25:31.883712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.986 [2024-11-06 13:25:31.883719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.895443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.895460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.895467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.908230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.908247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.908257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.919643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.919661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.919668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.930490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.930507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.930514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.941365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.941383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.941389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.951232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.951249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.951256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.963456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.963473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.963479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.976550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.976567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.976573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.986667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.986684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.986691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:31.996712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:31.996729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:31.996735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.006576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.006596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.014213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.014231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.014237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.024058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.024076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.024082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.033968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.033986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.033992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.044989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.045007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 
[2024-11-06 13:25:32.045013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.056452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.056470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.056476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.066782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.066800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.066806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.077650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.077667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.086792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.086810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.249 [2024-11-06 13:25:32.086817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.249 [2024-11-06 13:25:32.096902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.249 [2024-11-06 13:25:32.096920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.096927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.250 [2024-11-06 13:25:32.108072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.250 [2024-11-06 13:25:32.108091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.108098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.250 [2024-11-06 13:25:32.115822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.250 [2024-11-06 13:25:32.115840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.115847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.250 [2024-11-06 13:25:32.121260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.250 [2024-11-06 13:25:32.121278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.121284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.250 [2024-11-06 13:25:32.132374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.250 [2024-11-06 13:25:32.132392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.132399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.250 [2024-11-06 13:25:32.144630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.250 [2024-11-06 13:25:32.144648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.250 [2024-11-06 13:25:32.144655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.156544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.156563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.156570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.168707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.168725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.168731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.180116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.180134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.180143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.191694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.191712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.191719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.196797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.196814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.196820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.202670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.202687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.202694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.211252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.211269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.211276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.215505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.215523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.215529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.224551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.224568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.224575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.234767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.234785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.234791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.244921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.244939] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.244946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.250060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.250077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.250084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.260012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.260029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.260036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.271863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.271882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.271888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.283467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.283486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.283492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.292727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.292749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.292756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.300953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.300971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.309180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 
[2024-11-06 13:25:32.309199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.309205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.319050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.319068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.319075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.329862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.329881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.329890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.341150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.341169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.341175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.352994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.353013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.353019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.364390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.364408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.364415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.512 [2024-11-06 13:25:32.375816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60) 00:28:50.512 [2024-11-06 13:25:32.375834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.512 [2024-11-06 13:25:32.375842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:50.513 [2024-11-06 13:25:32.386855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18e9a60)
00:28:50.513 [2024-11-06 13:25:32.386874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-11-06 13:25:32.386880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:50.513 [2024-11-06 13:25:32.398916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:50.513 [2024-11-06 13:25:32.398934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-11-06 13:25:32.398941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:50.513 [2024-11-06 13:25:32.409942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:50.513 [2024-11-06 13:25:32.409960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-11-06 13:25:32.409966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.774 [2024-11-06 13:25:32.421344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18e9a60)
00:28:50.774 [2024-11-06 13:25:32.421363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.774 [2024-11-06 13:25:32.421369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:50.774 3060.50 IOPS, 382.56 MiB/s
00:28:50.774 Latency(us)
00:28:50.774 [2024-11-06T12:25:32.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:50.774 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:50.774 nvme0n1 : 2.00 3061.35 382.67 0.00 0.00 5223.88 525.65 13161.81
00:28:50.774 [2024-11-06T12:25:32.676Z] ===================================================================================================================
00:28:50.774 [2024-11-06T12:25:32.676Z] Total : 3061.35 382.67 0.00 0.00 5223.88 525.65 13161.81
00:28:50.774 {
00:28:50.774   "results": [
00:28:50.774     {
00:28:50.774       "job": "nvme0n1",
00:28:50.774       "core_mask": "0x2",
00:28:50.774       "workload": "randread",
00:28:50.774       "status": "finished",
00:28:50.774       "queue_depth": 16,
00:28:50.774       "io_size": 131072,
00:28:50.774       "runtime": 2.00467,
00:28:50.774       "iops": 3061.3517436785105,
00:28:50.774       "mibps": 382.6689679598138,
00:28:50.774       "io_failed": 0,
00:28:50.774       "io_timeout": 0,
00:28:50.774       "avg_latency_us": 5223.877516701972,
00:28:50.774       "min_latency_us": 525.6533333333333,
00:28:50.774       "max_latency_us": 13161.813333333334
00:28:50.774     }
00:28:50.774   ],
00:28:50.774   "core_count": 1
00:28:50.774 }
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:50.774 | .driver_specific
00:28:50.774 | .nvme_error
00:28:50.774 | .status_code
00:28:50.774 | .command_transient_transport_error'
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 ))
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1902759
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1902759 ']'
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1902759
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:50.774 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1902759
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1902759'
00:28:51.034 killing process with pid 1902759
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1902759
00:28:51.034 Received shutdown signal, test time was about 2.000000 seconds
00:28:51.034
00:28:51.034 Latency(us)
00:28:51.034 [2024-11-06T12:25:32.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:51.034 [2024-11-06T12:25:32.936Z] ===================================================================================================================
00:28:51.034 [2024-11-06T12:25:32.936Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1902759
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1903624
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1903624 /var/tmp/bperf.sock
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1903624 ']'
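The assertion logged just above is the heart of this pass: get_transient_errcount reads the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables and checks that at least one COMMAND TRANSIENT TRANSPORT ERROR completion was counted (197 here). A minimal sketch of the equivalent query, reconstructed from the xtrace rather than quoted from the script, with the rpc.py path and socket exactly as logged:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat carries driver-specific NVMe error counters when the
        # controller was set up with --nvme-error-stat; jq pulls out the tally
        # that the TRANSIENT TRANSPORT ERROR completions above fed into.
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The test then asserts the counter is non-zero, as in the (( 197 > 0 )) line above:
    (( $(get_transient_errcount nvme0n1) > 0 ))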
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:51.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:51.034 13:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:51.035 [2024-11-06 13:25:32.844931] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:28:51.035 [2024-11-06 13:25:32.844988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903624 ]
00:28:51.035 [2024-11-06 13:25:32.926517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:51.035 [2024-11-06 13:25:32.956151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:51.295 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:51.865 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:51.865 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:51.865 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:52.126 13:25:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:52.387 nvme0n1
00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10
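The xtrace above shows the RPCs that arm the randwrite digest-error scenario before any I/O is issued. Condensed into a sketch, with every command taken verbatim from the log and the comments being interpretation only (note the harness uses two sockets: bperf_rpc addresses /var/tmp/bperf.sock, while rpc_cmd goes to the target application's default RPC socket):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Count NVMe error statuses per controller and retry failed I/O indefinitely,
    # so injected digest errors surface as counters instead of failed test I/O.
    "$rpc_py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any crc32c error injection left over from the previous (randread) pass.
    "$rpc_py" accel_error_inject_error -o crc32c -t disable

    # Attach the controller with data digest (DDGST) enabled on the NVMe/TCP transport.
    "$rpc_py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm corruption of crc32c results in the accel layer; the -i 256 argument is
    # carried over verbatim from the log (it appears to bound how many operations
    # get corrupted). With a wrong CRC32C, PDUs fail digest verification and the
    # commands complete as COMMAND TRANSIENT TRANSPORT ERROR, as in the records below.
    "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 256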
-- # set +x 00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:52.387 13:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.387 Running I/O for 2 seconds... 00:28:52.387 [2024-11-06 13:25:34.259537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.387 [2024-11-06 13:25:34.260281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.387 [2024-11-06 13:25:34.260309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:52.387 [2024-11-06 13:25:34.268237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.387 [2024-11-06 13:25:34.268949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.387 [2024-11-06 13:25:34.268969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.387 [2024-11-06 13:25:34.276772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.387 [2024-11-06 13:25:34.277510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.387 [2024-11-06 13:25:34.277528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.387 [2024-11-06 13:25:34.285320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.387 [2024-11-06 13:25:34.286053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.387 [2024-11-06 13:25:34.286070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.648 [2024-11-06 13:25:34.293865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.648 [2024-11-06 13:25:34.294596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.648 [2024-11-06 13:25:34.294613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.648 [2024-11-06 13:25:34.302394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.648 [2024-11-06 13:25:34.303129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.648 [2024-11-06 13:25:34.303146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 
cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.648 [2024-11-06 13:25:34.310903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.648 [2024-11-06 13:25:34.311589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.648 [2024-11-06 13:25:34.311606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:52.648 [2024-11-06 13:25:34.319672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.648 [2024-11-06 13:25:34.320291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.648 [2024-11-06 13:25:34.320308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.328182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.649 [2024-11-06 13:25:34.328814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.328831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.336696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.649 [2024-11-06 13:25:34.337335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.337352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.345184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.649 [2024-11-06 13:25:34.345766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.345783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.353674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.649 [2024-11-06 13:25:34.354310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.354327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.362154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.649 [2024-11-06 13:25:34.362781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.362798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.370629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.649 [2024-11-06 13:25:34.371255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.371272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.379122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.649 [2024-11-06 13:25:34.379756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.379773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.387597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.649 [2024-11-06 13:25:34.388225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.388241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.396072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.649 [2024-11-06 13:25:34.396707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.396723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.404593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f0bc0 00:28:52.649 [2024-11-06 13:25:34.405235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.413092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f81e0 00:28:52.649 [2024-11-06 13:25:34.413725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.413742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.421965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.649 [2024-11-06 13:25:34.422767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.422783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.430344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.649 [2024-11-06 13:25:34.431207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.431224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.438826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.649 [2024-11-06 13:25:34.439671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.439687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.447301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.649 [2024-11-06 13:25:34.448202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.448219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.455791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.649 [2024-11-06 13:25:34.456629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.456645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.464250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.649 [2024-11-06 13:25:34.465095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.465111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.472712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:52.649 [2024-11-06 13:25:34.473562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.473583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.481192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:52.649 [2024-11-06 13:25:34.482046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.482063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.489658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.649 [2024-11-06 13:25:34.490501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.490517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.498139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.649 [2024-11-06 13:25:34.498972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.498988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.506607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.649 [2024-11-06 13:25:34.507445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.507462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.515065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.649 [2024-11-06 13:25:34.515911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.523555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.649 [2024-11-06 13:25:34.524415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.524432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.532028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.649 [2024-11-06 13:25:34.532882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.649 [2024-11-06 13:25:34.540496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:52.649 [2024-11-06 13:25:34.541345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.649 [2024-11-06 13:25:34.541361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.548957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:52.911 [2024-11-06 13:25:34.549808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.557422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.911 [2024-11-06 13:25:34.558268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.558285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.565898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.911 [2024-11-06 13:25:34.566756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.566773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.574378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.911 [2024-11-06 13:25:34.575183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.575200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.582861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.911 [2024-11-06 13:25:34.583700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.583716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.591327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.911 [2024-11-06 13:25:34.592173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.592189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.599779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.911 [2024-11-06 13:25:34.600626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 
13:25:34.600642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.608251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:52.911 [2024-11-06 13:25:34.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.609072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.616739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:52.911 [2024-11-06 13:25:34.617603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.617620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.625279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.911 [2024-11-06 13:25:34.626144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.626161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.633754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.911 [2024-11-06 13:25:34.634596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.634612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.642213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.911 [2024-11-06 13:25:34.643068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.643085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.650686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.911 [2024-11-06 13:25:34.651536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.651553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.659211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.911 [2024-11-06 13:25:34.660033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:52.911 [2024-11-06 13:25:34.660050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.667727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.911 [2024-11-06 13:25:34.668579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.668596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.676216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:52.911 [2024-11-06 13:25:34.677060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.677076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.684686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:52.911 [2024-11-06 13:25:34.685554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.685570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.693153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.911 [2024-11-06 13:25:34.693958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.693974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.701619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.911 [2024-11-06 13:25:34.702467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.702483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.710107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.911 [2024-11-06 13:25:34.710965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.911 [2024-11-06 13:25:34.710982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.911 [2024-11-06 13:25:34.718569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.912 [2024-11-06 13:25:34.719434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7561 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.719451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.727054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.912 [2024-11-06 13:25:34.727907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.727924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.735507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.912 [2024-11-06 13:25:34.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.736368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.743963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:52.912 [2024-11-06 13:25:34.744826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.744842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.752521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:52.912 [2024-11-06 13:25:34.753362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.753379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.761012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:52.912 [2024-11-06 13:25:34.761870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.761886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.769474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:52.912 [2024-11-06 13:25:34.770338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.770358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.777974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:52.912 [2024-11-06 13:25:34.778810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17974 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.778827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.786430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:52.912 [2024-11-06 13:25:34.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.787299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.794919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:52.912 [2024-11-06 13:25:34.795778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.795795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:52.912 [2024-11-06 13:25:34.803416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:52.912 [2024-11-06 13:25:34.804267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.912 [2024-11-06 13:25:34.804283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.811894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.173 [2024-11-06 13:25:34.812752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.812769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.820375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.173 [2024-11-06 13:25:34.821215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.821231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.828836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.173 [2024-11-06 13:25:34.829695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.829711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.837299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.173 [2024-11-06 13:25:34.838152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.838168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.845773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.173 [2024-11-06 13:25:34.846618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.846634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.854259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:53.173 [2024-11-06 13:25:34.855124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.855141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.862743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:53.173 [2024-11-06 13:25:34.863591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.863607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.871205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.173 [2024-11-06 13:25:34.872068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.872085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.879672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.173 [2024-11-06 13:25:34.880525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.880542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.888143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.173 [2024-11-06 13:25:34.889001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.889017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.896631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.173 [2024-11-06 13:25:34.897497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:69 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.897514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.905294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.173 [2024-11-06 13:25:34.906129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.906145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.913778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.173 [2024-11-06 13:25:34.914615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.914631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.922237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:53.173 [2024-11-06 13:25:34.923076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.173 [2024-11-06 13:25:34.923093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.173 [2024-11-06 13:25:34.930696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:53.174 [2024-11-06 13:25:34.931563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.931580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.939159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.174 [2024-11-06 13:25:34.940039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.940055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.947632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.174 [2024-11-06 13:25:34.948478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.948494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.956140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.174 [2024-11-06 13:25:34.956957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.956974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.964624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.174 [2024-11-06 13:25:34.965426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.965443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.973095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.174 [2024-11-06 13:25:34.973967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.973984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.981568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.174 [2024-11-06 13:25:34.982416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.982433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.990047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:53.174 [2024-11-06 13:25:34.990892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.990911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:34.998521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:53.174 [2024-11-06 13:25:34.999371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:34.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.007020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.174 [2024-11-06 13:25:35.007860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.007876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.015486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.174 [2024-11-06 13:25:35.016341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.016358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.023950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.174 [2024-11-06 13:25:35.024787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.024803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.032432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.174 [2024-11-06 13:25:35.033298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.033314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.040917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.174 [2024-11-06 13:25:35.041782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.041799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.049407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.174 [2024-11-06 13:25:35.050263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.050279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.057868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:53.174 [2024-11-06 13:25:35.058730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.058749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.174 [2024-11-06 13:25:35.066324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:53.174 [2024-11-06 13:25:35.067183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.174 [2024-11-06 13:25:35.067199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.074798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.435 [2024-11-06 
13:25:35.075640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.075655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.083284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.435 [2024-11-06 13:25:35.084135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.084152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.091772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.435 [2024-11-06 13:25:35.092614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.092630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.100235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.435 [2024-11-06 13:25:35.101083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.101099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.108776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.435 [2024-11-06 13:25:35.109617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.109633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.117228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.435 [2024-11-06 13:25:35.118099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.118115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.125694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166efae0 00:28:53.435 [2024-11-06 13:25:35.126562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.126578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.134191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 
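[editor's note] Every completion printed in this stretch carries the same status word: (00/22) is Status Code Type 0h (generic command status) with Status Code 22h, which SPDK renders as COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the Do Not Retry bit is clear (the host may resubmit), p is the phase tag, m the More bit, and sqhd the current submission-queue head. A minimal decode sketch follows, assuming the standard NVMe completion-entry DW3 layout (phase tag in bit 16, 15-bit status field in bits 31:17); the helper name decode_cpl_dw3 is illustrative, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

/* Decode NVMe completion DW3: bit 16 is the phase tag; bits 31:17 hold
 * the 15-bit status field (SC in bits 7:0, SCT in 10:8, CRD in 12:11,
 * More in 13, Do Not Retry in 14). */
static void decode_cpl_dw3(uint32_t dw3)
{
    uint32_t status = dw3 >> 17;
    printf("p:%u sct:0x%02x sc:0x%02x m:%u dnr:%u\n",
           (dw3 >> 16) & 1,
           (status >> 8) & 0x7,
           status & 0xff,
           (status >> 13) & 1,
           (status >> 14) & 1);
}

int main(void)
{
    /* (00/22) with p:0 m:0 dnr:0, as in the records above. */
    decode_cpl_dw3(0x0022u << 17);
    return 0;
}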
00:28:53.435 [2024-11-06 13:25:35.135044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.135060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.142658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.435 [2024-11-06 13:25:35.143518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.143534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.151127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.435 [2024-11-06 13:25:35.151945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.151961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.159591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.435 [2024-11-06 13:25:35.160462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.160479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.168062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.435 [2024-11-06 13:25:35.168861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.168878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.176530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.435 [2024-11-06 13:25:35.177386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.177402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.185001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166ed920 00:28:53.435 [2024-11-06 13:25:35.185851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.185868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.193473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with 
pdu=0x2000166efae0 00:28:53.435 [2024-11-06 13:25:35.194282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.435 [2024-11-06 13:25:35.194298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.435 [2024-11-06 13:25:35.201961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f92c0 00:28:53.436 [2024-11-06 13:25:35.202817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.202833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.210422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f7100 00:28:53.436 [2024-11-06 13:25:35.211227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.211246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.218891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f4f40 00:28:53.436 [2024-11-06 13:25:35.219733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.219752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.227359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e6300 00:28:53.436 [2024-11-06 13:25:35.228200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.228217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.235837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.436 [2024-11-06 13:25:35.236684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.236700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.244313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166eb760 00:28:53.436 [2024-11-06 13:25:35.245157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.245173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:53.436 29989.00 IOPS, 117.14 MiB/s [2024-11-06T12:25:35.338Z] [2024-11-06 13:25:35.254004] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166e4140 00:28:53.436 [2024-11-06 13:25:35.255297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.255313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.261420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.261642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.261658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.270125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.270381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.270398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.278916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.279156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.279171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.287635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.287866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.287882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.296347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.296649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.296666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.305093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.305336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 
13:25:35.313894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.314129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.314154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.322617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.322883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.322899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.436 [2024-11-06 13:25:35.331347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.436 [2024-11-06 13:25:35.331588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.436 [2024-11-06 13:25:35.331604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.340061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.340309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.340325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.348838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.349065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.349080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.357567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.357856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.357873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.366321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.366551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.366567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
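[editor's note] Each failure above is the same three-record sequence: tcp.c:2233:data_crc32_calc_done flags a data-digest (DDGST) mismatch on a received PDU, nvme_qpair.c:243 prints the WRITE that PDU belonged to, and nvme_qpair.c:474 prints its completion with the transient (retryable) transport error. The digest NVMe/TCP defines for the PDU payload is CRC32C (Castagnoli). Below is a minimal bitwise sketch of that checksum for reference only; SPDK itself uses table-driven or hardware-accelerated implementations. The standard check string "123456789" should yield 0xE3069283.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli, polynomial 0x82F63B78), the checksum
 * used for the NVMe/TCP data digest. Bitwise variant for clarity. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    const char msg[] = "123456789";
    printf("0x%08X\n", crc32c(0, msg, sizeof(msg) - 1)); /* 0xE3069283 */
    return 0;
}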
00:28:53.697 [2024-11-06 13:25:35.375018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.375257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.375273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.383836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.384106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.384123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.392552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.697 [2024-11-06 13:25:35.392821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.697 [2024-11-06 13:25:35.392838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.697 [2024-11-06 13:25:35.401297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.401540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.401556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.410048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.410311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.410328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.418861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.418959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.418975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.427647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.427896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.427911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a 
p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.436385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.436574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.436592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.445078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.445326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.445342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.453893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.454080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.454095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.462605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.462818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.462833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.471394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.471643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.480090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.480326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.480343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.488841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.489053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.489068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.497556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.497659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.497675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.506304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.506526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.506541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.515039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.515294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.515311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.523803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.524034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.524050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.532565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.532817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.532832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.541275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.541500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.541515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.550035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.550270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.550287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.558756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.559013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.559029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.567488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.567768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.567784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.576192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.576430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.576447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.584902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.585147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.585163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.698 [2024-11-06 13:25:35.593648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.698 [2024-11-06 13:25:35.593919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.698 [2024-11-06 13:25:35.593936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.602408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.602637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.602653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.611170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.611372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.611387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.619869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.620127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.620143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.628615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.628859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.628875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.637318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.637545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.637560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.646074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.646303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.646319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.654793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.960 [2024-11-06 13:25:35.655030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.960 [2024-11-06 13:25:35.655045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.960 [2024-11-06 13:25:35.663509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.663715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.663730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.672230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.672465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.680955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.681176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.681192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.689667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.689919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.698431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.698679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.698695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.707150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.707390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.707406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.715898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.716121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.716138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.724626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.724908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.724925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.733357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.733607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.733624] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.742111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.742227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.750934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.751157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.751173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.759694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.759949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.759966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.768391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.768631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.768648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.777180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.777443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.777459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.785980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.786192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.786208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.794691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.794964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.794980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.803491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.803743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.803763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.812245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.812510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.812527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.821001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.821245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.821262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.829732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.829972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.829988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.838524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.838731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.847214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.961 [2024-11-06 13:25:35.847466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.961 [2024-11-06 13:25:35.847483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:53.961 [2024-11-06 13:25:35.855929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:53.962 [2024-11-06 13:25:35.856189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:53.962 [2024-11-06 13:25:35.856207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.864648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.864891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.864907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.873382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.873587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.873603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.882157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.882393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.882418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.890907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.891103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.891118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.899637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.899858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.899875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.908641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.908889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.908905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.917365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.917656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 
13:25:35.917672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.926098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.926324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.926340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.934894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.935147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.943643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.943757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.943772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.952358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.223 [2024-11-06 13:25:35.952555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.223 [2024-11-06 13:25:35.952571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.223 [2024-11-06 13:25:35.961155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:35.961423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:35.961440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:35.969883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:35.970133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:35.970155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:35.978638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:35.978924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.224 [2024-11-06 13:25:35.978941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:35.987378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:35.987580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:35.987596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:35.996119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:35.996375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:35.996391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.004891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.005130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.005146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.013604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.013827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.013843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.022313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.022625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.022641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.031098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.031324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.031339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.039774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.040021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:54.224 [2024-11-06 13:25:36.040037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.048506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.048762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.048778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.057216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.057504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.057520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.066037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.066268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.066283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.074793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.075029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.075044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.083526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.083776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.083792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.092260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.092491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.092507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.101014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.101258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:363 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.101275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.109796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.110041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.110056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.224 [2024-11-06 13:25:36.118650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.224 [2024-11-06 13:25:36.118950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.224 [2024-11-06 13:25:36.118966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.127384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.127645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.127662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.136110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.136348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.136364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.144836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.145058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.145074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.153574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.153821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.153837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.162342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.162597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11369 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.162614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.171112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.171362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.485 [2024-11-06 13:25:36.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.485 [2024-11-06 13:25:36.179851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.485 [2024-11-06 13:25:36.180096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.486 [2024-11-06 13:25:36.180112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.486 [2024-11-06 13:25:36.188618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.486 [2024-11-06 13:25:36.188852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.486 [2024-11-06 13:25:36.188868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.486 [2024-11-06 13:25:36.197338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.486 [2024-11-06 13:25:36.197579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.486 [2024-11-06 13:25:36.197598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.486 [2024-11-06 13:25:36.206061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.486 [2024-11-06 13:25:36.206312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.486 [2024-11-06 13:25:36.206329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.486 [2024-11-06 13:25:36.214848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.486 [2024-11-06 13:25:36.215109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.486 [2024-11-06 13:25:36.215126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:54.486 [2024-11-06 13:25:36.223614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0 00:28:54.486 [2024-11-06 13:25:36.223827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:54.486 [2024-11-06 13:25:36.223843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:54.486 [2024-11-06 13:25:36.232403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0
00:28:54.486 [2024-11-06 13:25:36.232640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:54.486 [2024-11-06 13:25:36.232654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:54.486 [2024-11-06 13:25:36.241139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0
00:28:54.486 [2024-11-06 13:25:36.241365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:54.486 [2024-11-06 13:25:36.241380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:54.486 [2024-11-06 13:25:36.249901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731750) with pdu=0x2000166f46d0
00:28:54.486 [2024-11-06 13:25:36.250160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:54.486 [2024-11-06 13:25:36.250177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:54.486 29616.00 IOPS, 115.69 MiB/s
00:28:54.486 Latency(us)
00:28:54.486 [2024-11-06T12:25:36.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.486 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:54.486 nvme0n1 : 2.00 29609.92 115.66 0.00 0.00 4316.07 2143.57 9775.79
00:28:54.486 [2024-11-06T12:25:36.388Z] ===================================================================================================================
00:28:54.486 [2024-11-06T12:25:36.388Z] Total : 29609.92 115.66 0.00 0.00 4316.07 2143.57 9775.79
00:28:54.486 {
00:28:54.486 "results": [
00:28:54.486 {
00:28:54.486 "job": "nvme0n1",
00:28:54.486 "core_mask": "0x2",
00:28:54.486 "workload": "randwrite",
00:28:54.486 "status": "finished",
00:28:54.486 "queue_depth": 128,
00:28:54.486 "io_size": 4096,
00:28:54.486 "runtime": 2.004193,
00:28:54.486 "iops": 29609.92279685639,
00:28:54.486 "mibps": 115.66376092522027,
00:28:54.486 "io_failed": 0,
00:28:54.486 "io_timeout": 0,
00:28:54.486 "avg_latency_us": 4316.072457985081,
00:28:54.486 "min_latency_us": 2143.5733333333333,
00:28:54.486 "max_latency_us": 9775.786666666667
00:28:54.486 }
00:28:54.486 ],
00:28:54.486 "core_count": 1
00:28:54.486 }
00:28:54.486 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:54.486 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:54.486 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:54.486 | .driver_specific
00:28:54.486 | .nvme_error
00:28:54.486 | .status_code
00:28:54.486 | .command_transient_transport_error'
00:28:54.486
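[annotation] What get_transient_errcount does here: it asks the bdevperf instance, over its RPC socket, for per-bdev I/O statistics and pulls out the NVMe transient-transport-error counter (these error counters are populated because the bdev layer is configured with bdev_nvme_set_options --nvme-error-stat, as seen in the setup trace further below). A minimal Python sketch of the same query, assuming the rpc.py path, socket, and JSON field layout shown in this log; the helper name mirrors the shell function and is otherwise hypothetical:

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def get_transient_errcount(bdev="nvme0n1", sock="/var/tmp/bperf.sock"):
        # Equivalent of: rpc.py -s $sock bdev_get_iostat -b $bdev | jq -r '...'
        raw = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
        stat = json.loads(raw)
        # Walk the same path the jq filter above walks.
        return stat["bdevs"][0]["driver_specific"]["nvme_error"][
            "status_code"]["command_transient_transport_error"]

The (( 232 > 0 )) test just below is the actual pass/fail check for this phase: every injected digest failure should have landed in this counter, so a zero count would mean the error path was never exercised.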
13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 )) 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1903624 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1903624 ']' 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1903624 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1903624 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1903624' 00:28:54.746 killing process with pid 1903624 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1903624 00:28:54.746 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.746 00:28:54.746 Latency(us) 00:28:54.746 [2024-11-06T12:25:36.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.746 [2024-11-06T12:25:36.648Z] =================================================================================================================== 00:28:54.746 [2024-11-06T12:25:36.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1903624 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1904436 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1904436 /var/tmp/bperf.sock 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1904436 ']' 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:54.746 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:54.747 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.747 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:54.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.747 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.747 13:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.006 [2024-11-06 13:25:36.688013] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:28:55.006 [2024-11-06 13:25:36.688068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904436 ] 00:28:55.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.006 Zero copy mechanism will not be used. 00:28:55.006 [2024-11-06 13:25:36.772805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.006 [2024-11-06 13:25:36.801899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.949 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.950 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.211 nvme0n1 00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
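[annotation] The trace above sets up the second error pass: a fresh bdevperf (pid 1904436) running 131072-byte randwrite at queue depth 16, a controller attached with --ddgst so data PDUs on the TCP connection carry a CRC32C data digest (DDGST), and accel_error_inject_error -o crc32c -t corrupt -i 32 forcing roughly every 32nd crc32c operation to produce a wrong value. The "Data digest error" records that follow are the receiving side recomputing CRC32C over the PDU data and failing to match. A bit-by-bit sketch of the digest math, only to show why any corruption is caught; the real code uses table- or instruction-accelerated CRC32C, and the payload below is an assumed stand-in:

    def crc32c(data: bytes) -> int:
        # Reflected CRC32C (Castagnoli): poly 0x1EDC6F41 (0x82F63B78 reversed),
        # initial value and final XOR 0xFFFFFFFF -- the CRC used for DDGST.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC32C check value
    payload = bytes(131072)                    # one 128 KiB write, as in this run
    assert crc32c(payload) != crc32c(b"\x01" + payload[1:])  # one flipped bit -> mismatch

Because the injected fault corrupts only the digest rather than tearing down the connection, each failed command completes with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0 (do-not-retry clear), which is exactly what the counter read back by get_transient_errcount accumulates.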
00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:56.212 13:25:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:56.212 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.212 Zero copy mechanism will not be used. 00:28:56.212 Running I/O for 2 seconds... 00:28:56.212 [2024-11-06 13:25:38.054513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.054728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.054761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.058695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.058913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.058934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.062630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.062827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.062844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.066760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.066960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.066979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.072296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.072343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.078281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.078481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 
13:25:38.078498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.082915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.083106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.083123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.086706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.086899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.086917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.090549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.090595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.090611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.096033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.096331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.096350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.100136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.100350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.100367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.103709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.103903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.103920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.212 [2024-11-06 13:25:38.109808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.212 [2024-11-06 13:25:38.110141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:56.212 [2024-11-06 13:25:38.110159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.115795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.116019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.122985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.123176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.123192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.126942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.127134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.127150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.130751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.130941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.130958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.135372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.135566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.135582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.139912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.140101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.140121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.144044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.144235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.144252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.147531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.147721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.147737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.151806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.151993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.152010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.158297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.158499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.158516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.162460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.162646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.166380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.166570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.166587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.171462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.171653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.171670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.175727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.175913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.175930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.179567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.179753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.179770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.184916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.185095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.185112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.189742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.189929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.189945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.195892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.196174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.196192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.203596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.203779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.203797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.208152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.208329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.208345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.216451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.216690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.216706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.224123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.224283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.224300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.228210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.228373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.228390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.235248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.474 [2024-11-06 13:25:38.235583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.474 [2024-11-06 13:25:38.235601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.474 [2024-11-06 13:25:38.240430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.240474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.240490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.249475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.249543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.249559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.259284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.259371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.267948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 
13:25:38.268062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.268077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.276094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.276191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.276208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.280783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.280875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.280890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.289706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.289777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.289792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.298561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.298654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.298676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.307643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.307698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.307714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.316891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.317184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.317201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.327563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 
00:28:56.475 [2024-11-06 13:25:38.327730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.327752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.337964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.338244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.338261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.348246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.348544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.348560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.359184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.359501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.359518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.475 [2024-11-06 13:25:38.369160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.475 [2024-11-06 13:25:38.369264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.475 [2024-11-06 13:25:38.369280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.376722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.376797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.376812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.383284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.383349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.383364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.391500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with 
pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.391546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.391562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.400606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.400673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.400688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.405521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.405590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.405607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.414379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.414656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.414673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.424777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.425025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.425041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.435356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.435569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.435585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.445676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.445965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.445982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.456466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.456714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.456731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.467402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.467580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.467596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.477779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.477989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.478005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.488042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.488335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.488352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.498299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.498518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.498534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.505707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.505769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.505784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.514544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.514660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.514676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.522707] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.522767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.522782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.526695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.526740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.736 [2024-11-06 13:25:38.526761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.736 [2024-11-06 13:25:38.532375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.736 [2024-11-06 13:25:38.532450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.532468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.536565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.536674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.536690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.539846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.539900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.539916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.542949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.543006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.543022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.546789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.546849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.546864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.551397] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.551460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.551475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.555034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.555117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.555132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.563734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.564001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.574400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.574641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.574658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.584880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.585013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.585029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.594517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.594822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.594839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.604386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.604677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.604694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 
13:25:38.614466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.614532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.614547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.624049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.624336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.624353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.737 [2024-11-06 13:25:38.634014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.737 [2024-11-06 13:25:38.634287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.737 [2024-11-06 13:25:38.634304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.644143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.644399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.644415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.654777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.655035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.655052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.665292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.665565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.665582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.675459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.675762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.675779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
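The run above is the data-digest error-injection path exercised by this test: for each WRITE, a CRC32C is recomputed over the received data PDU payload and compared against the PDU's DDGST field, and on mismatch tcp.c reports the digest error and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the qpair prints before the I/O is retried. A minimal standalone sketch of that digest comparison, assuming a hypothetical payload and a hand-rolled bitwise CRC32C; the real driver uses SPDK's own crc32c helpers, and all names below are illustrative only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): reflected algorithm, reversed polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload standing in for a data PDU's DATA field. */
    const char payload[] = "123456789";
    uint32_t ddgst_sent = crc32c((const uint8_t *)payload, 9); /* 0xE3069283 */

    /* Flip one bit to model the injected on-the-wire corruption. */
    char corrupted[sizeof(payload)];
    memcpy(corrupted, payload, sizeof(payload));
    corrupted[3] ^= 0x01;
    uint32_t ddgst_recv = crc32c((const uint8_t *)corrupted, 9);

    printf("sent DDGST=0x%08X recv DDGST=0x%08X -> %s\n",
           ddgst_sent, ddgst_recv,
           ddgst_sent == ddgst_recv ? "ok" : "data digest error");
    return 0;
}

Status 00/22 in the completions is Status Code Type 0 (generic) with Status Code 0x22 (Transient Transport Error), i.e. a retryable failure, which is why each injected digest error is logged and the workload continues rather than aborting.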
00:28:56.998 [2024-11-06 13:25:38.686063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.686322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.686339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.696071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.696314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.696331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.706381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.706617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.706632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.716581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.716834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.716850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.726086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.726328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.726345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.736369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.998 [2024-11-06 13:25:38.736463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.998 [2024-11-06 13:25:38.736479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.998 [2024-11-06 13:25:38.747035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.747268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.747284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.756716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.756864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.756883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.763070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.763122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.763137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.768110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.768180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.768195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.772806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.772867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.772882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.778598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.778656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.778671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.787456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.787641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.787657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.796095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.796374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.796391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.804677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.804923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.804940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.812942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.813196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.813212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.820876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.820949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.820964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.830257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.830568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.830585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.839657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.839938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.839955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.850013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.850074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.850089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.859513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.859761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.859777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.869867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.870117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.870135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.879861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.880125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.880141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.999 [2024-11-06 13:25:38.890230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:56.999 [2024-11-06 13:25:38.890346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.999 [2024-11-06 13:25:38.890361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.260 [2024-11-06 13:25:38.899781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.260 [2024-11-06 13:25:38.900047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.260 [2024-11-06 13:25:38.900064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.260 [2024-11-06 13:25:38.909456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.909741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.909762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.919137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.919393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.929375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.929676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.929693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.939344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.939616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.939633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.948982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.949106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.959671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.959890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.959906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.968390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.968462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.968478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.977279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.977363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.977379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.988247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.988506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.988523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:38.999899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:38.999970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:38.999986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.011135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.011382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.022470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.022653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.034042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.034243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.034259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.044894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.045104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.045120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 3972.00 IOPS, 496.50 MiB/s [2024-11-06T12:25:39.163Z] [2024-11-06 13:25:39.056595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.056881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.056897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.067321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.067602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.067619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.077593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.077877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.077893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.087984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.088251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.088267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.097870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.098246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.098262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.108314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.108572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.108587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.119381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.119672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.119689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.129221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.129466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.129481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.136940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.137048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.137063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.143851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.144135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.144150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.151901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.151947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.151962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.155093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.155139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.155155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.261 [2024-11-06 13:25:39.157998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.261 [2024-11-06 13:25:39.158050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.261 [2024-11-06 13:25:39.158065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.523 [2024-11-06 13:25:39.160794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.160845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.160859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.163383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.163434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.163449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.166569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.166628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.166643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.169754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.169815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.169830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.172395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.172455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.172471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.175177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.175232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.175248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.177663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.177710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.177726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.180163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.180210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.180231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.182638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.182692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.182708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.185134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.185179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.187587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.187636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.187651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.190048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.190098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.190113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.192519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.192562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.192578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.195224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.195269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.195284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.199455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.199499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.199514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.204800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.204911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.204927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.211715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.211890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.211909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.218001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.218353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.218370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.221142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.221199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.221214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.228507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.228794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.234051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.234120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.234135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.236780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.236827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.236843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.239307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.239352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.239368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.241885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.241938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.241953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.244439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 
13:25:39.244495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.244511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.247204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.247257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.247273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.524 [2024-11-06 13:25:39.249804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.524 [2024-11-06 13:25:39.249855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.524 [2024-11-06 13:25:39.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.253664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.253711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.253727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.257532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.257603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.257619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.264622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.264681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.264697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.267500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.267568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.267583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.273087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 
00:28:57.525 [2024-11-06 13:25:39.273372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.273389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.278885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.278961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.278977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.282520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.282595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.285324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.285389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.285404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.288142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.288199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.288215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.290840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.290886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.290902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.293431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.293511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.293526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.296089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) 
with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.296145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.296160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.299172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.299217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.299232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.301803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.301855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.301871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.304392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.304451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.304466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.306913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.306974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.306992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.309419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.309464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.309480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.311902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.311966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.311982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.315190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.315480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.315497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.323003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.323236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.323254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.327491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.327554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.327569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.330301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.330354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.330370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.333111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.333170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.333186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.336267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.336352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.339120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.339178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.339194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.341586] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.341645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.341660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.525 [2024-11-06 13:25:39.344096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.525 [2024-11-06 13:25:39.344158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.525 [2024-11-06 13:25:39.344173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.346564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.346634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.346650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.349033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.349093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.349108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.351731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.351805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.351821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.358241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.358477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.358494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.367238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.367361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.367376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.374425] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.374500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.374516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.381589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.381778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.381794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.389349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.389637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.389654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.397725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.397848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.397865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.404582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.404689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.404704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.413782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.413943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.413959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.526 [2024-11-06 13:25:39.420898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.526 [2024-11-06 13:25:39.420951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.526 [2024-11-06 13:25:39.420966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.789 
[2024-11-06 13:25:39.428796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.428854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.428869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.433598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.433646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.433662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.437400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.437462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.437481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.445673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.445738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.445759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.453318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.453380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.458233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.458279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.458295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.460921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.460974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.460989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:57.789 [2024-11-06 13:25:39.463638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.463685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.463700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.466767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.466831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.466847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.469916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.469985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.470001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.472868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.472917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.472932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.475484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.475573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.475588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.478855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.478950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.478965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.481656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.481710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.481725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.485839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.485921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.485936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.488602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.488667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.488682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.491063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.491106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.491122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.789 [2024-11-06 13:25:39.493589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.789 [2024-11-06 13:25:39.493641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.789 [2024-11-06 13:25:39.493657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.499463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.499514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.499530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.505907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.505960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.505976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.512329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.512594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.512611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.519228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.519318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.519333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.527617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.527942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.527958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.535877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.535934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.535950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.540640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.540707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.540723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.543920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.544007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.544022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.547397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.547465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.547480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.552049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.552111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.552127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.559471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.559560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.564591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.564636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.564652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.567813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.567921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.567936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.571114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.571163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.571179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.574000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.574074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.574090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.576833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.576878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.576894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.579790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.579855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.579871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.582573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.582631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.582647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.585081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.585151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.585167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.587575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.587638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.587654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.593312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.593619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.593636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.597875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.597945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.597960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.600381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.600432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.602899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.602955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 
13:25:39.602971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.605401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.605460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.605475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.608027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.608079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.608094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.790 [2024-11-06 13:25:39.610507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.790 [2024-11-06 13:25:39.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.790 [2024-11-06 13:25:39.610581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.612983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.613051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.613067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.615447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.615524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.615540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.621619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.621715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.621732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.625288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.625330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.791 [2024-11-06 13:25:39.625346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.627975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.628024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.628041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.630706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.630765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.630780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.633186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.633243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.633258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.635814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.635920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.635936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.639072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.639164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.639180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.641644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.641701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.641719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.644135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.644199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.644215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.646626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.646698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.646713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.649106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.649186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.649202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.651593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.651655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.651670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.654048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.654114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.654130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.656538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.656611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.656627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.659049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.659138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.659154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.662173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.662260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.662276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.671417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.671687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.671704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:57.791 [2024-11-06 13:25:39.681012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:57.791 [2024-11-06 13:25:39.681308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.791 [2024-11-06 13:25:39.681324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.690018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.690107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.690122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.693627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.693673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.693688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.701591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.701654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.701670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.704564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.704614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.704629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.707551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.707614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.707628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.710437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.710508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.710523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.713398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.713468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.713484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.716209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.716257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.716273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.718722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.718780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.718796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.722361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.722480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.722497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.726756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.726844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.726859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.729272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.729322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.729338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.732210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.732324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.732340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.735340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.735414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.735430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.737828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.737882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.737897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.740318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.740369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.740387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.743124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.743178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.743193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.746570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.746687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.754562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.754684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.754700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.763534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.763591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.763607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.773871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.774102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.774119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.783867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.784099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.784115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.793911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.794125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.794141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.055 [2024-11-06 13:25:39.804554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.055 [2024-11-06 13:25:39.804881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.055 [2024-11-06 13:25:39.804898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.815242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.815551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.815570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.823929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 
13:25:39.823976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.823991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.831476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.831526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.831542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.834412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.834702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.834719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.840474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.840758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.840774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.847081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.847180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.847196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.849891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.849940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.849955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.852657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.852728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.852743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.855356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 
00:28:58.056 [2024-11-06 13:25:39.855414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.855430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.858135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.858199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.858214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.860870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.860926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.860942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.863539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.863595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.863611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.866082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.866132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.866147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.868549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.868604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.868620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.871049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.871120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.873528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with 
pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.873578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.873594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.876005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.876078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.876093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.878467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.878526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.878542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.881415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.881493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.881508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.884286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.884339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.884355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.886742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.886810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.886826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.889200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.889256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.889271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.891659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.891733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.891760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.894111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.894159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.894174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.896524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.896594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.896609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.898987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.901409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.056 [2024-11-06 13:25:39.901461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.056 [2024-11-06 13:25:39.901479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.056 [2024-11-06 13:25:39.903993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.904065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.904081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.906407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.906460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.906475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.908836] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.908892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.908907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.911261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.911312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.911328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.913651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.913699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.913714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.916106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.916163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.916178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.919398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.919503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.919519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.926918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.927127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.927143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.933115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.933248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.933265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.936362] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.936495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.936511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.939353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.939416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.939431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.942488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.942608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.942624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.057 [2024-11-06 13:25:39.946741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.057 [2024-11-06 13:25:39.946963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.057 [2024-11-06 13:25:39.946978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.954835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.954906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.954921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.957754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.957807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.957823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.960316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.960376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.960392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.319 
[2024-11-06 13:25:39.962878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.962976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.962991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.965433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.965510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.965525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.968004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.968072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.968088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.970494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.970561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.970576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.972976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.973039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.973055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.975452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.975524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.975539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.981296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.981347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.981363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:58.319 [2024-11-06 13:25:39.988104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.988366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.319 [2024-11-06 13:25:39.988384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.319 [2024-11-06 13:25:39.994415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.319 [2024-11-06 13:25:39.994669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:39.994685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.002782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.003001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.003021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.009861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.010118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.010135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.016331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.016413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.019621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.019724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.019741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.022895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.022969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.022984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.025591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.025693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.025709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.028276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.028342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.028357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.031006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.031077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.031092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.033739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.033824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.033839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.036431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.036524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.036539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.039020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.039120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.039136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.320 [2024-11-06 13:25:40.041509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90 00:28:58.320 [2024-11-06 13:25:40.041591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.320 [2024-11-06 13:25:40.041607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:58.320 [2024-11-06 13:25:40.044043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90
00:28:58.320 [2024-11-06 13:25:40.044124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.320 [2024-11-06 13:25:40.044139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:58.320 [2024-11-06 13:25:40.046528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90
00:28:58.320 [2024-11-06 13:25:40.046626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.320 [2024-11-06 13:25:40.046642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:58.320 [2024-11-06 13:25:40.049480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x731a90) with pdu=0x2000166fef90
00:28:58.320 [2024-11-06 13:25:40.050681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.320 [2024-11-06 13:25:40.050700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:58.320 5626.00 IOPS, 703.25 MiB/s
00:28:58.320 Latency(us)
00:28:58.320 [2024-11-06T12:25:40.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:58.320 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:58.320 nvme0n1 : 2.00 5629.08 703.64 0.00 0.00 2839.22 846.51 11905.71
00:28:58.320 [2024-11-06T12:25:40.222Z] ===================================================================================================================
00:28:58.320 [2024-11-06T12:25:40.222Z] Total : 5629.08 703.64 0.00 0.00 2839.22 846.51 11905.71
00:28:58.320 {
00:28:58.320 "results": [
00:28:58.320 {
00:28:58.320 "job": "nvme0n1",
00:28:58.320 "core_mask": "0x2",
00:28:58.320 "workload": "randwrite",
00:28:58.320 "status": "finished",
00:28:58.320 "queue_depth": 16,
00:28:58.320 "io_size": 131072,
00:28:58.320 "runtime": 2.002457,
00:28:58.320 "iops": 5629.084669483539,
00:28:58.320 "mibps": 703.6355836854424,
00:28:58.320 "io_failed": 0,
00:28:58.320 "io_timeout": 0,
00:28:58.320 "avg_latency_us": 2839.2198722498224,
00:28:58.320 "min_latency_us": 846.5066666666667,
00:28:58.320 "max_latency_us": 11905.706666666667
00:28:58.320 }
00:28:58.320 ],
00:28:58.320 "core_count": 1
00:28:58.320 }
00:28:58.320 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:58.320 | .driver_specific
00:28:58.320 | .nvme_error
00:28:58.320 | .status_code
00:28:58.320 | .command_transient_transport_error'
00:28:58.320 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 ))
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1904436
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1904436 ']'
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1904436
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1904436
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1904436'
00:28:58.581 killing process with pid 1904436
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1904436
00:28:58.581 Received shutdown signal, test time was about 2.000000 seconds
00:28:58.581
00:28:58.581 Latency(us)
00:28:58.581 [2024-11-06T12:25:40.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:58.581 [2024-11-06T12:25:40.483Z] ===================================================================================================================
00:28:58.581 [2024-11-06T12:25:40.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1904436
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1901917
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1901917 ']'
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1901917
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:58.581 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1901917
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1901917'
00:28:58.842 killing process with pid 1901917
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1901917
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1901917
00:28:58.842
00:28:58.842 real 0m16.573s
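The xtrace lines above are the pass condition for this test case: get_transient_errcount pulls per-bdev I/O statistics from the running bdevperf process over its RPC socket and extracts the counter that each COMMAND TRANSIENT TRANSPORT ERROR completion increments, and the check (( 363 > 0 )) succeeds because 363 such completions were recorded for nvme0n1. A minimal sketch of those two helpers, reconstructed from the logged commands in host/digest.sh (an approximation inferred from the trace, not a verbatim copy of the script):

    # Drive the already-running bdevperf app over its dedicated RPC socket
    # (socket path and rpc.py location taken from the trace above).
    bperf_rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    # Count transient transport errors seen by a bdev; the data digest
    # failures injected by this test surface as that NVMe status code.
    get_transient_errcount() {
        bperf_rpc bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The throughput summary is internally consistent as well: at an I/O size of 131072 bytes (1/8 MiB), 5629.08 IOPS / 8 ≈ 703.64 MiB/s, matching the reported MiB/s column.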
00:28:58.842 user 0m32.863s
00:28:58.842 sys 0m3.612s
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:58.842 ************************************
00:28:58.842 END TEST nvmf_digest_error
00:28:58.842 ************************************
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:58.842 rmmod nvme_tcp
00:28:58.842 rmmod nvme_fabrics
00:28:58.842 rmmod nvme_keyring
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1901917 ']'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1901917
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1901917 ']'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1901917
00:28:58.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1901917) - No such process
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1901917 is not found'
00:28:58.842 Process with pid 1901917 is not found
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:58.842 13:25:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:01.387
00:29:01.387 real 0m43.434s
00:29:01.387 user 1m7.923s
00:29:01.387 sys 0m13.409s
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:01.387 ************************************
00:29:01.387 END TEST nvmf_digest
00:29:01.387 ************************************
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:01.387 ************************************
00:29:01.387 START TEST nvmf_bdevperf
00:29:01.387 ************************************
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:01.387 * Looking for test storage...
00:29:01.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version
00:29:01.387 13:25:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:01.387 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:01.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.388 --rc genhtml_branch_coverage=1 00:29:01.388 --rc genhtml_function_coverage=1 00:29:01.388 --rc genhtml_legend=1 00:29:01.388 --rc geninfo_all_blocks=1 00:29:01.388 --rc geninfo_unexecuted_blocks=1 00:29:01.388 00:29:01.388 ' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:01.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.388 --rc genhtml_branch_coverage=1 00:29:01.388 --rc genhtml_function_coverage=1 00:29:01.388 --rc genhtml_legend=1 00:29:01.388 --rc geninfo_all_blocks=1 00:29:01.388 --rc geninfo_unexecuted_blocks=1 00:29:01.388 00:29:01.388 ' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:01.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.388 --rc genhtml_branch_coverage=1 00:29:01.388 --rc genhtml_function_coverage=1 00:29:01.388 --rc genhtml_legend=1 00:29:01.388 --rc geninfo_all_blocks=1 00:29:01.388 --rc geninfo_unexecuted_blocks=1 00:29:01.388 00:29:01.388 ' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:01.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.388 --rc genhtml_branch_coverage=1 00:29:01.388 --rc genhtml_function_coverage=1 00:29:01.388 --rc genhtml_legend=1 00:29:01.388 --rc geninfo_all_blocks=1 00:29:01.388 --rc geninfo_unexecuted_blocks=1 00:29:01.388 00:29:01.388 ' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.388 13:25:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:09.532 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:09.532 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
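The loop traced above is nvmf/common.sh matching each PCI NIC against a table of supported Intel/Mellanox device IDs (here the two Intel E810 functions, 0x8086:0x159b, bound to the ice driver) and then, in the lines that follow, collecting their kernel net devices from sysfs. A minimal standalone sketch of the same idea, assuming only the standard sysfs layout; the variable names are illustrative, not SPDK's:

  # Match PCI functions by vendor:device ID and list their net interfaces.
  # Assumes /sys/bus/pci/devices/<addr>/{vendor,device,net/} as on any Linux box.
  intel=0x8086; e810=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == "$intel" && $(< "$pci/device") == "$e810" ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found ${net##*/} under ${pci##*/}"
      done
  done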
00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:09.532 Found net devices under 0000:31:00.0: cvl_0_0 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:09.532 Found net devices under 0000:31:00.1: cvl_0_1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:09.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:29:09.532 00:29:09.532 --- 10.0.0.2 ping statistics --- 00:29:09.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.532 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:29:09.532 00:29:09.532 --- 10.0.0.1 ping statistics --- 00:29:09.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.532 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.532 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1909389 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1909389 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1909389 ']' 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:09.533 13:25:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.533 [2024-11-06 13:25:50.820921] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:29:09.533 [2024-11-06 13:25:50.820985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.533 [2024-11-06 13:25:50.923336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:09.533 [2024-11-06 13:25:50.976446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.533 [2024-11-06 13:25:50.976497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.533 [2024-11-06 13:25:50.976506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.533 [2024-11-06 13:25:50.976514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.533 [2024-11-06 13:25:50.976520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.533 [2024-11-06 13:25:50.978406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.533 [2024-11-06 13:25:50.978566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.533 [2024-11-06 13:25:50.978567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.794 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:09.794 [2024-11-06 13:25:51.693443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.055 Malloc0 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
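For orientation: rpc_cmd in the trace is the test suite's wrapper that drives scripts/rpc.py against the target's UNIX socket (/var/tmp/spdk.sock, as the "Waiting for process to start up..." line above notes). The target bring-up traced so far, plus the namespace/listener calls that follow just below, could be reproduced by hand roughly as:

  # Rough standalone equivalent of the rpc_cmd calls in the trace.
  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # default socket path
  $RPC nvmf_create_transport -t tcp -o -u 8192                       # same transport options the test passed
  $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # ...nvmf_subsystem_add_ns and nvmf_subsystem_add_listener appear next in the trace below.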
00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:10.055 [2024-11-06 13:25:51.764238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:10.055 { 00:29:10.055 "params": { 00:29:10.055 "name": "Nvme$subsystem", 00:29:10.055 "trtype": "$TEST_TRANSPORT", 00:29:10.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.055 "adrfam": "ipv4", 00:29:10.055 "trsvcid": "$NVMF_PORT", 00:29:10.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.055 "hdgst": ${hdgst:-false}, 00:29:10.055 "ddgst": ${ddgst:-false} 00:29:10.055 }, 00:29:10.055 "method": "bdev_nvme_attach_controller" 00:29:10.055 } 00:29:10.055 EOF 00:29:10.055 )") 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:10.055 13:25:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:10.055 "params": { 00:29:10.055 "name": "Nvme1", 00:29:10.055 "trtype": "tcp", 00:29:10.055 "traddr": "10.0.0.2", 00:29:10.055 "adrfam": "ipv4", 00:29:10.055 "trsvcid": "4420", 00:29:10.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.055 "hdgst": false, 00:29:10.055 "ddgst": false 00:29:10.055 }, 00:29:10.055 "method": "bdev_nvme_attach_controller" 00:29:10.055 }' 00:29:10.055 [2024-11-06 13:25:51.822701] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:29:10.055 [2024-11-06 13:25:51.822768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909526 ] 00:29:10.056 [2024-11-06 13:25:51.917178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.317 [2024-11-06 13:25:51.970343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.578 Running I/O for 1 seconds... 00:29:11.519 8732.00 IOPS, 34.11 MiB/s 00:29:11.519 Latency(us) 00:29:11.519 [2024-11-06T12:25:53.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.519 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:11.519 Verification LBA range: start 0x0 length 0x4000 00:29:11.519 Nvme1n1 : 1.01 8803.20 34.39 0.00 0.00 14471.25 2990.08 14964.05 00:29:11.519 [2024-11-06T12:25:53.421Z] =================================================================================================================== 00:29:11.519 [2024-11-06T12:25:53.421Z] Total : 8803.20 34.39 0.00 0.00 14471.25 2990.08 14964.05 00:29:11.779 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1909860 00:29:11.779 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:11.780 { 00:29:11.780 "params": { 00:29:11.780 "name": "Nvme$subsystem", 00:29:11.780 "trtype": "$TEST_TRANSPORT", 00:29:11.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.780 "adrfam": "ipv4", 00:29:11.780 "trsvcid": "$NVMF_PORT", 00:29:11.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.780 "hdgst": ${hdgst:-false}, 00:29:11.780 "ddgst": ${ddgst:-false} 00:29:11.780 }, 00:29:11.780 "method": "bdev_nvme_attach_controller" 00:29:11.780 } 00:29:11.780 EOF 00:29:11.780 )") 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:11.780 13:25:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:11.780 "params": { 00:29:11.780 "name": "Nvme1", 00:29:11.780 "trtype": "tcp", 00:29:11.780 "traddr": "10.0.0.2", 00:29:11.780 "adrfam": "ipv4", 00:29:11.780 "trsvcid": "4420", 00:29:11.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.780 "hdgst": false, 00:29:11.780 "ddgst": false 00:29:11.780 }, 00:29:11.780 "method": "bdev_nvme_attach_controller" 00:29:11.780 }' 00:29:11.780 [2024-11-06 13:25:53.490945] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:29:11.780 [2024-11-06 13:25:53.490999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909860 ] 00:29:11.780 [2024-11-06 13:25:53.581751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.780 [2024-11-06 13:25:53.616600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.040 Running I/O for 15 seconds... 00:29:13.994 11027.00 IOPS, 43.07 MiB/s [2024-11-06T12:25:56.469Z] 11047.00 IOPS, 43.15 MiB/s [2024-11-06T12:25:56.469Z] 13:25:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1909389 00:29:14.567 13:25:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:14.567 [2024-11-06 13:25:56.442612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.567 [2024-11-06 13:25:56.442655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.567 [2024-11-06 13:25:56.442675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.567 [2024-11-06 13:25:56.442685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.567 [2024-11-06 13:25:56.442697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.567 [2024-11-06 13:25:56.442707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.567 [2024-11-06 13:25:56.442718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.567 [2024-11-06 13:25:56.442727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.567 [2024-11-06 13:25:56.442738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.567 [2024-11-06 13:25:56.442842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.567 [2024-11-06 13:25:56.442854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.567 [2024-11-06 
13:25:56.442862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the trace continues at 13:25:56 with a long run of near-identical nvme_qpair.c entries: each remaining in-flight READ/WRITE on qid:1, at ascending LBAs in the 94536-95552 range, is printed and then completed with ABORTED - SQ DELETION (00/08); the repeated entries are condensed here ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.569 [2024-11-06 13:25:56.444672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.569 [2024-11-06 13:25:56.444680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.570 [2024-11-06 13:25:56.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.444961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bb550 is same with the state(6) to be set 00:29:14.570 [2024-11-06 13:25:56.444970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.570 [2024-11-06 13:25:56.444976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.570 [2024-11-06 13:25:56.444983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95256 len:8 PRP1 0x0 PRP2 0x0 00:29:14.570 [2024-11-06 13:25:56.444991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.570 [2024-11-06 13:25:56.448593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.570 [2024-11-06 13:25:56.448646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.570 [2024-11-06 13:25:56.449433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.570 [2024-11-06 13:25:56.449451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.570 [2024-11-06 13:25:56.449459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.570 [2024-11-06 13:25:56.449676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.570 [2024-11-06 13:25:56.449898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.570 [2024-11-06 13:25:56.449907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.570 [2024-11-06 13:25:56.449920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.570 [2024-11-06 13:25:56.449928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.570 [2024-11-06 13:25:56.462676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.570 [2024-11-06 13:25:56.463143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.570 [2024-11-06 13:25:56.463161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.570 [2024-11-06 13:25:56.463170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.570 [2024-11-06 13:25:56.463385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.570 [2024-11-06 13:25:56.463601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.570 [2024-11-06 13:25:56.463610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.570 [2024-11-06 13:25:56.463618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:14.570 [2024-11-06 13:25:56.463626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.834 [2024-11-06 13:25:56.476602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.477265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.477305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.477316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.477557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.477784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.477794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.477803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.477811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.834 [2024-11-06 13:25:56.490370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.491061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.491102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.491113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.491352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.491572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.491581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.491589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.491598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.834 [2024-11-06 13:25:56.504183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.504880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.504922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.504933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.505172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.505393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.505402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.505410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.505418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.834 [2024-11-06 13:25:56.517980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.518648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.518690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.518702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.518950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.519171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.519180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.519188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.519197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.834 [2024-11-06 13:25:56.531738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.532312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.532334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.532342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.532559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.532783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.532792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.532799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.532806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.834 [2024-11-06 13:25:56.545547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.546093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.546113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.546125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.546341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.546557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.834 [2024-11-06 13:25:56.546565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.834 [2024-11-06 13:25:56.546572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.834 [2024-11-06 13:25:56.546579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.834 [2024-11-06 13:25:56.559324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.834 [2024-11-06 13:25:56.559857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.834 [2024-11-06 13:25:56.559877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.834 [2024-11-06 13:25:56.559885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.834 [2024-11-06 13:25:56.560101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.834 [2024-11-06 13:25:56.560316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.560325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.560333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.560340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.835 [2024-11-06 13:25:56.573082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.573717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.573774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.573786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.574029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.574252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.574261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.574270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.574278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.835 [2024-11-06 13:25:56.586839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.587489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.587541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.587553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.587810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.588033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.588053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.588061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.588069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.835 [2024-11-06 13:25:56.600653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.601380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.601434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.601446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.601692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.601929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.601941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.601949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.601958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.835 [2024-11-06 13:25:56.614530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.615215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.615277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.615289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.615542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.615783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.615793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.615801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.615811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.835 [2024-11-06 13:25:56.628384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.629133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.629195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.629207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.629460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.629684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.629695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.629704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.629720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.835 [2024-11-06 13:25:56.642222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.642990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.643052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.643064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.643317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.643541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.643550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.643559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.643568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.835 [2024-11-06 13:25:56.656155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.656772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.656832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.656846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.657099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.657322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.657333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.657341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.657351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.835 [2024-11-06 13:25:56.669926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.670602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.670664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.670676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.670945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.671170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.671179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.671188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.671196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.835 [2024-11-06 13:25:56.683765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.684407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.684435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.684444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.684663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.684892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.835 [2024-11-06 13:25:56.684902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.835 [2024-11-06 13:25:56.684910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.835 [2024-11-06 13:25:56.684917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.835 [2024-11-06 13:25:56.697714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.835 [2024-11-06 13:25:56.698414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.835 [2024-11-06 13:25:56.698477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.835 [2024-11-06 13:25:56.698490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.835 [2024-11-06 13:25:56.698743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.835 [2024-11-06 13:25:56.698984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.836 [2024-11-06 13:25:56.698996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.836 [2024-11-06 13:25:56.699004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.836 [2024-11-06 13:25:56.699013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:14.836 [2024-11-06 13:25:56.711628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.836 [2024-11-06 13:25:56.712376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.836 [2024-11-06 13:25:56.712438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.836 [2024-11-06 13:25:56.712450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.836 [2024-11-06 13:25:56.712703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.836 [2024-11-06 13:25:56.712940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.836 [2024-11-06 13:25:56.712951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.836 [2024-11-06 13:25:56.712959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.836 [2024-11-06 13:25:56.712969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:14.836 [2024-11-06 13:25:56.725553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:14.836 [2024-11-06 13:25:56.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.836 [2024-11-06 13:25:56.726350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:14.836 [2024-11-06 13:25:56.726363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:14.836 [2024-11-06 13:25:56.726622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:14.836 [2024-11-06 13:25:56.726857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:14.836 [2024-11-06 13:25:56.726867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:14.836 [2024-11-06 13:25:56.726876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:14.836 [2024-11-06 13:25:56.726885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.099 [2024-11-06 13:25:56.739467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.099 [2024-11-06 13:25:56.740148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.099 [2024-11-06 13:25:56.740210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.099 [2024-11-06 13:25:56.740223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.099 [2024-11-06 13:25:56.740475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.099 [2024-11-06 13:25:56.740700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.099 [2024-11-06 13:25:56.740710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.099 [2024-11-06 13:25:56.740719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.099 [2024-11-06 13:25:56.740728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.099 [2024-11-06 13:25:56.753322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.099 [2024-11-06 13:25:56.753832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.099 [2024-11-06 13:25:56.753881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.099 [2024-11-06 13:25:56.753891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.099 [2024-11-06 13:25:56.754130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.099 [2024-11-06 13:25:56.754351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.099 [2024-11-06 13:25:56.754360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.099 [2024-11-06 13:25:56.754368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.099 [2024-11-06 13:25:56.754376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.099 [2024-11-06 13:25:56.767274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.099 [2024-11-06 13:25:56.767961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.099 [2024-11-06 13:25:56.768023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.099 [2024-11-06 13:25:56.768035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.099 [2024-11-06 13:25:56.768288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.099 [2024-11-06 13:25:56.768512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.768529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.768538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.768547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.100 [2024-11-06 13:25:56.781141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.781832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.781894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.781906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.782159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.782382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.782393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.782401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.782410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.100 [2024-11-06 13:25:56.795021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.795707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.795779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.795793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.796047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.796271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.796282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.796290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.796299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.100 [2024-11-06 13:25:56.808875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.809558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.809620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.809633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.809901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.810126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.810137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.810146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.810162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.100 [2024-11-06 13:25:56.822746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.823467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.823530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.823543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.823811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.824036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.824046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.824055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.824064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.100 [2024-11-06 13:25:56.836632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.837381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.837442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.837454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.837707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.837947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.837958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.837967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.837976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.100 [2024-11-06 13:25:56.850574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.851298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.851360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.851373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.851625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.851865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.851876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.851884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.851894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.100 [2024-11-06 13:25:56.864484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.865222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.865292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.865305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.865557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.865795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.865805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.865813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.865823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.100 [2024-11-06 13:25:56.878406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.879043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.879074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.879083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.879302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.879520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.879530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.879538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.879546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.100 9400.33 IOPS, 36.72 MiB/s [2024-11-06T12:25:57.002Z] [2024-11-06 13:25:56.893987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.100 [2024-11-06 13:25:56.894561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.100 [2024-11-06 13:25:56.894587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.100 [2024-11-06 13:25:56.894595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.100 [2024-11-06 13:25:56.894835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.100 [2024-11-06 13:25:56.895055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.100 [2024-11-06 13:25:56.895064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.100 [2024-11-06 13:25:56.895072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.100 [2024-11-06 13:25:56.895080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.101 [2024-11-06 13:25:56.907929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.908562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.908588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.908596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.908831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.909052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.909061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.909069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.909077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.101 [2024-11-06 13:25:56.921837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.922402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.922428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.922437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.922655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.922883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.922893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.922901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.922910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.101 [2024-11-06 13:25:56.935675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.936284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.936308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.936316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.936535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.936760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.936769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.936777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.936785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.101 [2024-11-06 13:25:56.949548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.950195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.950219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.950228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.950445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.950662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.950677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.950686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.950693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.101 [2024-11-06 13:25:56.963493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.963987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.964011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.964020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.964237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.964454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.964471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.964479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.964487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.101 [2024-11-06 13:25:56.977257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.977812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.977836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.977844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.978061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.978279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.978287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.978295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.978302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.101 [2024-11-06 13:25:56.991060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.101 [2024-11-06 13:25:56.991792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.101 [2024-11-06 13:25:56.991855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.101 [2024-11-06 13:25:56.991868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.101 [2024-11-06 13:25:56.992121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.101 [2024-11-06 13:25:56.992345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.101 [2024-11-06 13:25:56.992354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.101 [2024-11-06 13:25:56.992363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.101 [2024-11-06 13:25:56.992380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.365 [2024-11-06 13:25:57.004848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.365 [2024-11-06 13:25:57.005561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-06 13:25:57.005623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.365 [2024-11-06 13:25:57.005636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.365 [2024-11-06 13:25:57.005904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.365 [2024-11-06 13:25:57.006129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.365 [2024-11-06 13:25:57.006140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.365 [2024-11-06 13:25:57.006148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.365 [2024-11-06 13:25:57.006157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.365 [2024-11-06 13:25:57.018726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.365 [2024-11-06 13:25:57.019445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.365 [2024-11-06 13:25:57.019507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.365 [2024-11-06 13:25:57.019520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.365 [2024-11-06 13:25:57.019791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.365 [2024-11-06 13:25:57.020016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.020027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.020035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.020044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.366 [2024-11-06 13:25:57.032614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.033296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.033358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.033370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.033623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.033864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.033874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.033884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.033893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.366 [2024-11-06 13:25:57.046464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.047149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.047219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.047232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.047484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.047708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.047718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.047726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.047735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.366 [2024-11-06 13:25:57.060325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.061029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.061091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.061104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.061357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.061581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.061591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.061599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.061608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.366 [2024-11-06 13:25:57.074210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.074868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.074932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.074945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.075198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.075423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.075433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.075442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.075452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.366 [2024-11-06 13:25:57.088049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.088671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.088699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.088708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.088947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.089168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.089176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.089184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.089192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.366 [2024-11-06 13:25:57.101997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.102571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.102598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.102606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.102831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.103050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.103060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.103068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.103076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.366 [2024-11-06 13:25:57.115857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.116416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.116440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.116449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.116666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.116891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.116901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.116909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.116917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.366 [2024-11-06 13:25:57.129701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.130336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.130398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.130410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.130663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.130904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.130922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.130930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.130939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.366 [2024-11-06 13:25:57.143517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.144116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.144146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.144155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.144375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.144593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.366 [2024-11-06 13:25:57.144602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.366 [2024-11-06 13:25:57.144610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.366 [2024-11-06 13:25:57.144618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.366 [2024-11-06 13:25:57.157404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.366 [2024-11-06 13:25:57.157980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.366 [2024-11-06 13:25:57.158043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.366 [2024-11-06 13:25:57.158056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.366 [2024-11-06 13:25:57.158308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.366 [2024-11-06 13:25:57.158533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.158543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.158552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.158561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.367 [2024-11-06 13:25:57.171163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.171884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.171946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.171958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.172210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.172434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.172443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.172452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.172461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.367 [2024-11-06 13:25:57.185066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.185806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.185868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.185881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.186134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.186358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.186368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.186377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.186386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.367 [2024-11-06 13:25:57.199012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.199760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.199822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.199836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.200089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.200313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.200324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.200333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.200343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.367 [2024-11-06 13:25:57.212945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.213623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.213685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.213698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.213963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.214188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.214198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.214206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.214215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.367 [2024-11-06 13:25:57.226800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.227519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.227586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.227599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.227867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.228092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.228101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.228110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.228119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.367 [2024-11-06 13:25:57.240711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.241435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.241496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.241509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.241775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.241999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.242009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.242017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.242027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.367 [2024-11-06 13:25:57.254607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.367 [2024-11-06 13:25:57.255233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.367 [2024-11-06 13:25:57.255263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.367 [2024-11-06 13:25:57.255272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.367 [2024-11-06 13:25:57.255491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.367 [2024-11-06 13:25:57.255710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.367 [2024-11-06 13:25:57.255720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.367 [2024-11-06 13:25:57.255728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.367 [2024-11-06 13:25:57.255736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.630 [2024-11-06 13:25:57.268521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.269116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.269141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.269150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.269383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.269601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.269610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.269618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.269626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.630 [2024-11-06 13:25:57.282268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.282867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.282912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.282922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.283160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.283381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.283391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.283399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.283406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.630 [2024-11-06 13:25:57.296216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.296860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.296922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.296935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.297187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.297411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.297420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.297429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.297438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.630 [2024-11-06 13:25:57.310042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.310755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.310818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.310831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.311083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.311308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.311318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.311334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.311345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.630 [2024-11-06 13:25:57.323950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.324636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.324698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.324711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.324978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.325203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.325213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.325221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.325230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.630 [2024-11-06 13:25:57.337816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.338412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.338476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.338489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.338741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.338978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.338990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.338998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.339007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.630 [2024-11-06 13:25:57.351594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.352245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.352275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.352283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.352502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.352721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.352730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.352738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.352755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.630 [2024-11-06 13:25:57.365548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.366264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.366326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.366338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.366591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.366827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.366838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.366847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.366857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.630 [2024-11-06 13:25:57.378211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.378762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.378788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.378794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.378948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.630 [2024-11-06 13:25:57.379102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.630 [2024-11-06 13:25:57.379109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.630 [2024-11-06 13:25:57.379115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.630 [2024-11-06 13:25:57.379121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.630 [2024-11-06 13:25:57.390854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.630 [2024-11-06 13:25:57.391384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.630 [2024-11-06 13:25:57.391403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.630 [2024-11-06 13:25:57.391409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.630 [2024-11-06 13:25:57.391559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.391709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.391715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.391720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.391726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.631 [2024-11-06 13:25:57.403480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.403981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.404005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.404011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.404161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.404310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.404317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.404322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.404328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.631 [2024-11-06 13:25:57.416208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.416705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.416755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.416765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.416941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.417094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.417101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.417107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.417113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.631 [2024-11-06 13:25:57.428844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.429236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.429255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.429261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.429411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.429561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.429566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.429572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.429578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.631 [2024-11-06 13:25:57.441477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.442037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.442075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.442085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.442256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.442414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.442421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.442427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.442434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.631 [2024-11-06 13:25:57.454164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.454877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.454914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.454923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.455094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.455247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.455253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.455259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.455265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.631 [2024-11-06 13:25:57.466855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.467339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.467356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.467362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.467511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.467660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.467666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.467671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.467676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.631 [2024-11-06 13:25:57.479466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.479871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.479888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.479894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.480043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.480192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.480199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.480208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.480213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.631 [2024-11-06 13:25:57.492070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.492589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.492621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.492630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.492801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.492954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.492960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.492966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.492972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.631 [2024-11-06 13:25:57.504702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.505297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.505328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.505337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.631 [2024-11-06 13:25:57.505502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.631 [2024-11-06 13:25:57.505654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.631 [2024-11-06 13:25:57.505661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.631 [2024-11-06 13:25:57.505666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.631 [2024-11-06 13:25:57.505672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.631 [2024-11-06 13:25:57.517389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.631 [2024-11-06 13:25:57.517872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.631 [2024-11-06 13:25:57.517904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.631 [2024-11-06 13:25:57.517912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.632 [2024-11-06 13:25:57.518079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.632 [2024-11-06 13:25:57.518231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.632 [2024-11-06 13:25:57.518237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.632 [2024-11-06 13:25:57.518243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.632 [2024-11-06 13:25:57.518248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.894 [2024-11-06 13:25:57.529964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.530549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.530580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.530589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.530759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.530911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.530918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.530923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.530930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.894 [2024-11-06 13:25:57.542638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.543202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.543232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.543241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.543405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.543556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.543563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.543569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.543575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.894 [2024-11-06 13:25:57.555281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.555816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.555831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.555836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.555986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.556134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.556139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.556145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.556150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.894 [2024-11-06 13:25:57.567853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.568342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.568355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.568364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.568512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.568660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.568666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.568671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.568676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.894 [2024-11-06 13:25:57.580514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.581082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.581113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.581121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.581287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.581439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.581445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.581451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.581457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.894 [2024-11-06 13:25:57.593167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.593624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.593639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.593645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.593798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.593947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.593953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.593958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.593963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.894 [2024-11-06 13:25:57.605824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.606400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.606431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.606440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.606604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.606767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.606775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.606781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.606787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.894 [2024-11-06 13:25:57.618492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.618955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.618970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.618976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.894 [2024-11-06 13:25:57.619125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.894 [2024-11-06 13:25:57.619273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.894 [2024-11-06 13:25:57.619279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.894 [2024-11-06 13:25:57.619284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.894 [2024-11-06 13:25:57.619289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.894 [2024-11-06 13:25:57.631127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.894 [2024-11-06 13:25:57.631647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.894 [2024-11-06 13:25:57.631660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.894 [2024-11-06 13:25:57.631666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.631818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.631966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.631972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.631977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.631982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.895 [2024-11-06 13:25:57.643818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.644269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.644281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.644287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.644435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.644583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.644589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.644598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.644602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.895 [2024-11-06 13:25:57.656445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.656901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.656913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.656919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.657067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.657215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.657220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.657225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.657230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.895 [2024-11-06 13:25:57.669065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.669395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.669407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.669413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.669560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.669709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.669714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.669719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.669724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.895 [2024-11-06 13:25:57.681706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.682273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.682303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.682312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.682476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.682628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.682634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.682639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.682645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.895 [2024-11-06 13:25:57.694359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.694864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.694880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.694885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.695034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.695183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.695188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.695193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.695198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.895 [2024-11-06 13:25:57.707058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.707577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.707589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.707595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.707743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.707899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.707904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.707910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.707915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.895 [2024-11-06 13:25:57.719752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.720201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.720213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.720219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.720366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.720515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.720520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.720525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.720530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.895 [2024-11-06 13:25:57.732398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.732786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.732800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.732809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.732957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.895 [2024-11-06 13:25:57.733105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.895 [2024-11-06 13:25:57.733111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.895 [2024-11-06 13:25:57.733116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.895 [2024-11-06 13:25:57.733121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.895 [2024-11-06 13:25:57.745094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.895 [2024-11-06 13:25:57.745431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.895 [2024-11-06 13:25:57.745443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.895 [2024-11-06 13:25:57.745448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.895 [2024-11-06 13:25:57.745597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.896 [2024-11-06 13:25:57.745750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.896 [2024-11-06 13:25:57.745757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.896 [2024-11-06 13:25:57.745762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.896 [2024-11-06 13:25:57.745767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.896 [2024-11-06 13:25:57.757739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.896 [2024-11-06 13:25:57.758196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.896 [2024-11-06 13:25:57.758208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.896 [2024-11-06 13:25:57.758214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.896 [2024-11-06 13:25:57.758362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.896 [2024-11-06 13:25:57.758510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.896 [2024-11-06 13:25:57.758516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.896 [2024-11-06 13:25:57.758521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.896 [2024-11-06 13:25:57.758526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:15.896 [2024-11-06 13:25:57.770359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.896 [2024-11-06 13:25:57.770907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.896 [2024-11-06 13:25:57.770937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.896 [2024-11-06 13:25:57.770946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.896 [2024-11-06 13:25:57.771112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.896 [2024-11-06 13:25:57.771268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.896 [2024-11-06 13:25:57.771274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.896 [2024-11-06 13:25:57.771280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.896 [2024-11-06 13:25:57.771286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:15.896 [2024-11-06 13:25:57.782995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:15.896 [2024-11-06 13:25:57.783568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.896 [2024-11-06 13:25:57.783598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:15.896 [2024-11-06 13:25:57.783607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:15.896 [2024-11-06 13:25:57.783777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:15.896 [2024-11-06 13:25:57.783930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:15.896 [2024-11-06 13:25:57.783936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:15.896 [2024-11-06 13:25:57.783941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:15.896 [2024-11-06 13:25:57.783948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.158 [2024-11-06 13:25:57.795649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.796294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.796324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.796333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.796497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.796656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.796663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.796669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.158 [2024-11-06 13:25:57.796675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.158 [2024-11-06 13:25:57.808236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.808790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.808820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.808829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.808996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.809148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.809154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.809163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.158 [2024-11-06 13:25:57.809170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.158 [2024-11-06 13:25:57.820876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.821474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.821505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.821513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.821678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.821835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.821842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.821848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.158 [2024-11-06 13:25:57.821854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.158 [2024-11-06 13:25:57.833553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.834040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.834055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.834061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.834211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.834359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.834365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.834370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.158 [2024-11-06 13:25:57.834375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.158 [2024-11-06 13:25:57.846213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.846674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.846687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.846692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.846844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.846993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.846999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.847004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.158 [2024-11-06 13:25:57.847008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.158 [2024-11-06 13:25:57.858844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.158 [2024-11-06 13:25:57.859373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.158 [2024-11-06 13:25:57.859403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.158 [2024-11-06 13:25:57.859412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.158 [2024-11-06 13:25:57.859575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.158 [2024-11-06 13:25:57.859727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.158 [2024-11-06 13:25:57.859733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.158 [2024-11-06 13:25:57.859739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.859751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.159 [2024-11-06 13:25:57.871454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.871866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.871897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.871906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.872072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.872224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.872230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.872236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.872242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.159 [2024-11-06 13:25:57.884095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.884761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.884791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.884800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.884967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.885119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.885125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.885130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.885135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.159 7050.25 IOPS, 27.54 MiB/s [2024-11-06T12:25:58.061Z] [2024-11-06 13:25:57.896718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.897210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.897241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.897256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.897428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.897580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.897588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.897594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.897601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.159 [2024-11-06 13:25:57.909342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.909841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.909871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.909879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.910046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.910197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.910203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.910209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.910215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
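Note on the interleaved sample above ("7050.25 IOPS, 27.54 MiB/s"): this is the periodic per-second performance counter printed between reconnect stanzas. The two figures are mutually consistent with a 4 KiB I/O size (an inference, not stated in the log): 7050.25 IOPS x 4096 B = 28,877,824 B/s = 27.54 MiB/s.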
00:29:16.159 [2024-11-06 13:25:57.921924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.922500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.922531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.922539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.922704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.922862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.922869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.922875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.922881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:16.159 [2024-11-06 13:25:57.934582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:16.159 [2024-11-06 13:25:57.935144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.159 [2024-11-06 13:25:57.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:16.159 [2024-11-06 13:25:57.935183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:16.159 [2024-11-06 13:25:57.935347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:16.159 [2024-11-06 13:25:57.935503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:16.159 [2024-11-06 13:25:57.935510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:16.159 [2024-11-06 13:25:57.935516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:16.159 [2024-11-06 13:25:57.935521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:16.159 [2024-11-06 13:25:57.947230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.159 [2024-11-06 13:25:57.947721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.159 [2024-11-06 13:25:57.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.159 [2024-11-06 13:25:57.947741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.159 [2024-11-06 13:25:57.947894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.159 [2024-11-06 13:25:57.948043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.159 [2024-11-06 13:25:57.948049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.159 [2024-11-06 13:25:57.948054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.159 [2024-11-06 13:25:57.948059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.159 [2024-11-06 13:25:57.959900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.159 [2024-11-06 13:25:57.960349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.159 [2024-11-06 13:25:57.960363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.159 [2024-11-06 13:25:57.960368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.159 [2024-11-06 13:25:57.960516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.159 [2024-11-06 13:25:57.960664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.159 [2024-11-06 13:25:57.960670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.159 [2024-11-06 13:25:57.960675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.159 [2024-11-06 13:25:57.960680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.159 [2024-11-06 13:25:57.972522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.159 [2024-11-06 13:25:57.972924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.159 [2024-11-06 13:25:57.972954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.159 [2024-11-06 13:25:57.972962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.159 [2024-11-06 13:25:57.973128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.159 [2024-11-06 13:25:57.973280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.159 [2024-11-06 13:25:57.973286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.159 [2024-11-06 13:25:57.973295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.159 [2024-11-06 13:25:57.973301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.159 [2024-11-06 13:25:57.985214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.159 [2024-11-06 13:25:57.985791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:57.985821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:57.985830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:57.985996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:57.986148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:57.986154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:57.986159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:57.986165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.160 [2024-11-06 13:25:57.997893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.160 [2024-11-06 13:25:57.998485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:57.998515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:57.998524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:57.998688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:57.998846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:57.998853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:57.998858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:57.998865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.160 [2024-11-06 13:25:58.010567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.160 [2024-11-06 13:25:58.011065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:58.011081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:58.011087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:58.011235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:58.011384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:58.011390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:58.011395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:58.011400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
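The "(9)" in the flush error above is errno 9, EBADF ("Bad file descriptor"): the socket was torn down when the connect was refused, so the follow-up flush of the qpair operates on a dead descriptor. A tiny hedged sketch of the same errno, unrelated to SPDK internals:

    /* Hypothetical demo: any I/O on a closed descriptor fails with
     * errno 9 (EBADF), mirroring the "Failed to flush ... (9)" step. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(STDOUT_FILENO);  /* any valid descriptor */
        close(fd);                    /* now stale, like the dead qpair socket */

        if (write(fd, "x", 1) < 0) {
            /* Prints: write: errno 9 (Bad file descriptor) */
            printf("write: errno %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }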
00:29:16.160 [2024-11-06 13:25:58.023245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.160 [2024-11-06 13:25:58.023836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:58.023866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:58.023875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:58.024041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:58.024193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:58.024200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:58.024205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:58.024211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.160 [2024-11-06 13:25:58.035922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.160 [2024-11-06 13:25:58.036464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:58.036494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:58.036503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:58.036667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:58.036825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:58.036832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:58.036838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:58.036843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.160 [2024-11-06 13:25:58.048548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.160 [2024-11-06 13:25:58.049102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.160 [2024-11-06 13:25:58.049133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.160 [2024-11-06 13:25:58.049142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.160 [2024-11-06 13:25:58.049306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.160 [2024-11-06 13:25:58.049458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.160 [2024-11-06 13:25:58.049465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.160 [2024-11-06 13:25:58.049470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.160 [2024-11-06 13:25:58.049476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.422 [2024-11-06 13:25:58.061185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.422 [2024-11-06 13:25:58.061675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.422 [2024-11-06 13:25:58.061689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.422 [2024-11-06 13:25:58.061699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.061853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.062002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.062008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.062013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.062018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.073854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.074308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.074321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.074326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.074474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.074623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.074629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.074634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.074639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.086469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.086973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.086986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.086991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.087139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.087287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.087293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.087298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.087303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.099153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.099602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.099615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.099620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.099773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.099922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.099931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.099936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.099941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.111774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.112336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.112366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.112375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.112539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.112691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.112697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.112703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.112708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.124410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.124999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.125029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.125038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.125202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.125353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.125360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.125365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.125371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.137070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.137660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.137691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.137699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.137873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.138025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.138032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.138037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.138046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.149738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.150292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.150321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.150330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.150494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.150646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.150652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.150658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.150664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.162362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.162826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.162856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.162865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.163032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.163183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.163189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.163194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.163200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.175083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.175659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.175689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.423 [2024-11-06 13:25:58.175697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.423 [2024-11-06 13:25:58.175871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.423 [2024-11-06 13:25:58.176024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.423 [2024-11-06 13:25:58.176030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.423 [2024-11-06 13:25:58.176036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.423 [2024-11-06 13:25:58.176041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.423 [2024-11-06 13:25:58.187733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.423 [2024-11-06 13:25:58.188266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.423 [2024-11-06 13:25:58.188296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.188305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.188469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.188621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.188627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.188633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.188639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.200357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.200895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.200904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.201070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.201222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.201228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.201234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.201239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.212939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.213464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.213494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.213503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.213667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.213825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.213832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.213838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.213843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
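Consecutive "resetting controller" notices above are spaced roughly 12.5 ms apart (13:25:58.200 -> .212 -> .225 -> .238), i.e. the reconnect logic re-arms on a short fixed cadence until the target comes back or a controller-loss timeout expires. A sketch of that retry shape only; try_connect(), RETRY_DELAY_US and MAX_ATTEMPTS are hypothetical stand-ins, not SPDK's poller API:

    /* Illustrative retry cadence, assuming a target that never comes back. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    #define RETRY_DELAY_US 12500   /* ~12.5 ms, the spacing seen in the log */
    #define MAX_ATTEMPTS   800     /* stand-in for a controller-loss timeout */

    static bool try_connect(void)
    {
        return false;              /* stand-in: connect() keeps being refused */
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (try_connect()) {
                printf("reconnected on attempt %d\n", attempt);
                return 0;
            }
            /* one log cycle ends here ("Resetting controller failed.") */
            usleep(RETRY_DELAY_US);
        }
        printf("giving up after %d attempts\n", MAX_ATTEMPTS);
        return 1;
    }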
00:29:16.424 [2024-11-06 13:25:58.225542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.226042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.226057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.226063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.226215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.226364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.226369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.226374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.226379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.238207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.238681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.238694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.238700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.238852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.239001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.239007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.239012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.239017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.250843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.251414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.251444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.251453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.251617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.251777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.251784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.251789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.251795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.263487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.264073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.264103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.264111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.264276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.264427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.264437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.264443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.264449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.276149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.276660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.276690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.276699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.276873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.277025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.277031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.277037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.277043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.288737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.289308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.289338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.289347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.289511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.289662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.289669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.289674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.289680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.301405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.424 [2024-11-06 13:25:58.302040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.424 [2024-11-06 13:25:58.302071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.424 [2024-11-06 13:25:58.302079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.424 [2024-11-06 13:25:58.302243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.424 [2024-11-06 13:25:58.302395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.424 [2024-11-06 13:25:58.302401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.424 [2024-11-06 13:25:58.302407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.424 [2024-11-06 13:25:58.302416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.424 [2024-11-06 13:25:58.314115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.425 [2024-11-06 13:25:58.314553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.425 [2024-11-06 13:25:58.314582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.425 [2024-11-06 13:25:58.314590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.425 [2024-11-06 13:25:58.314762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.425 [2024-11-06 13:25:58.314913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.425 [2024-11-06 13:25:58.314919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.425 [2024-11-06 13:25:58.314925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.425 [2024-11-06 13:25:58.314931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.326774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.327311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.327341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.327350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.327514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.327665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.327672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.327677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.327683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.339381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.339877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.339907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.339915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.340082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.340234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.340240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.340245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.340252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.351950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.352528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.352558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.352567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.352731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.352891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.352899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.352904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.352910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.364604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.365180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.365210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.365219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.365383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.365534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.365540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.365546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.365551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.377252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.377794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.377815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.377821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.377976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.378126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.378131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.378136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.378142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.389831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.390364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.390394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.687 [2024-11-06 13:25:58.390402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.687 [2024-11-06 13:25:58.390570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.687 [2024-11-06 13:25:58.390722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.687 [2024-11-06 13:25:58.390728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.687 [2024-11-06 13:25:58.390734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.687 [2024-11-06 13:25:58.390739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.687 [2024-11-06 13:25:58.402453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.687 [2024-11-06 13:25:58.403054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.687 [2024-11-06 13:25:58.403084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.403093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.403257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.403408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.403415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.403420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.403426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.415122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.415691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.415721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.415729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.415900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.416052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.416059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.416064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.416070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.427762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.428309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.428339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.428348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.428512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.428664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.428674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.428679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.428685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.440386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.440883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.440913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.440922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.441088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.441240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.441246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.441252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.441257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.452956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.453523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.453553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.453561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.453725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.453883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.453890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.453897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.453903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.465593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.466178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.466208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.466217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.466381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.466533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.466540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.466546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.466555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.478267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.478739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.478774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.478783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.478950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.479101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.479108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.479113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.479119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.490951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.491488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.491519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.491527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.491691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.491851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.491858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.491863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.491869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.503580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.504144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.504174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.504183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.504346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.504498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.504504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.504510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.504515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.516154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.516654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.516676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.688 [2024-11-06 13:25:58.516682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.688 [2024-11-06 13:25:58.516837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.688 [2024-11-06 13:25:58.516986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.688 [2024-11-06 13:25:58.516992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.688 [2024-11-06 13:25:58.516997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.688 [2024-11-06 13:25:58.517002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.688 [2024-11-06 13:25:58.528828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.688 [2024-11-06 13:25:58.529040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.688 [2024-11-06 13:25:58.529054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.689 [2024-11-06 13:25:58.529059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.689 [2024-11-06 13:25:58.529208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.689 [2024-11-06 13:25:58.529356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.689 [2024-11-06 13:25:58.529362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.689 [2024-11-06 13:25:58.529367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.689 [2024-11-06 13:25:58.529372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.689 [2024-11-06 13:25:58.541482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.689 [2024-11-06 13:25:58.542045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.689 [2024-11-06 13:25:58.542075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.689 [2024-11-06 13:25:58.542083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.689 [2024-11-06 13:25:58.542247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.689 [2024-11-06 13:25:58.542399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.689 [2024-11-06 13:25:58.542405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.689 [2024-11-06 13:25:58.542411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.689 [2024-11-06 13:25:58.542416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.689 [2024-11-06 13:25:58.554116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.689 [2024-11-06 13:25:58.554707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.689 [2024-11-06 13:25:58.554737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.689 [2024-11-06 13:25:58.554752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.689 [2024-11-06 13:25:58.554920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.689 [2024-11-06 13:25:58.555072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.689 [2024-11-06 13:25:58.555078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.689 [2024-11-06 13:25:58.555083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.689 [2024-11-06 13:25:58.555089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.689 [2024-11-06 13:25:58.566786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.689 [2024-11-06 13:25:58.567350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.689 [2024-11-06 13:25:58.567380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.689 [2024-11-06 13:25:58.567389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.689 [2024-11-06 13:25:58.567553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.689 [2024-11-06 13:25:58.567704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.689 [2024-11-06 13:25:58.567710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.689 [2024-11-06 13:25:58.567716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.689 [2024-11-06 13:25:58.567721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.689 [2024-11-06 13:25:58.579419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.689 [2024-11-06 13:25:58.580050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.689 [2024-11-06 13:25:58.580080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.689 [2024-11-06 13:25:58.580089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.689 [2024-11-06 13:25:58.580252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.689 [2024-11-06 13:25:58.580404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.689 [2024-11-06 13:25:58.580411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.689 [2024-11-06 13:25:58.580416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.689 [2024-11-06 13:25:58.580422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.591990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.592592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.592624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.592632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.592804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.592956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.592967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.592973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.592979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.604688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.605202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.605217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.605223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.605372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.605520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.605526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.605531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.605536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.617363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.617722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.617736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.617741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.617896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.618044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.618049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.618054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.618059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.630027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.630578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.630609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.630617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.630788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.630939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.630946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.630951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.630957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.642661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.643192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.643222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.643231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.643395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.643547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.643553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.643559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.643564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.655263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.655837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.655868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.655876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.656043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.656194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.656200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.656206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.656211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.667917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.668476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.668506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.668515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.668679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.668836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.668844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.668849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.668855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.680555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.681131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.681164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.681173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.681337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.681489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.681495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.681500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.681506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.693213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.693790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.952 [2024-11-06 13:25:58.693821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.952 [2024-11-06 13:25:58.693829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.952 [2024-11-06 13:25:58.693996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.952 [2024-11-06 13:25:58.694148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.952 [2024-11-06 13:25:58.694154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.952 [2024-11-06 13:25:58.694159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.952 [2024-11-06 13:25:58.694165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.952 [2024-11-06 13:25:58.705883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.952 [2024-11-06 13:25:58.706421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.706452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.706460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.706624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.706784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.706791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.706797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.706802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.718494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.718993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.719024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.719033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.719203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.719355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.719361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.719367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.719373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.731079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.731585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.731599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.731604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.731756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.731905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.731911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.731916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.731921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.743767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.744251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.744264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.744269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.744418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.744566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.744571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.744576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.744581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.756357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.756675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.756690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.756695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.756850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.756999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.757005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.757013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.757017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.768987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.769573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.769604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.769612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.769784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.769936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.769942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.769948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.769953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.781646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.782197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.782227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.782236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.782400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.782551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.782558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.782563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.782569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.794273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.794840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.794870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.794879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.795045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.795196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.795202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.795208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.795214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.806933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.807500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.807530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.807539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.807703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.807862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.807869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.807875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.807881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.819572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.820151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.953 [2024-11-06 13:25:58.820182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.953 [2024-11-06 13:25:58.820190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.953 [2024-11-06 13:25:58.820354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.953 [2024-11-06 13:25:58.820506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.953 [2024-11-06 13:25:58.820512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.953 [2024-11-06 13:25:58.820518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.953 [2024-11-06 13:25:58.820523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.953 [2024-11-06 13:25:58.832224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.953 [2024-11-06 13:25:58.832705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.954 [2024-11-06 13:25:58.832720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.954 [2024-11-06 13:25:58.832726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.954 [2024-11-06 13:25:58.832880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.954 [2024-11-06 13:25:58.833029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.954 [2024-11-06 13:25:58.833035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.954 [2024-11-06 13:25:58.833040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.954 [2024-11-06 13:25:58.833045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:16.954 [2024-11-06 13:25:58.844873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:16.954 [2024-11-06 13:25:58.845328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:16.954 [2024-11-06 13:25:58.845340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:16.954 [2024-11-06 13:25:58.845349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:16.954 [2024-11-06 13:25:58.845498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:16.954 [2024-11-06 13:25:58.845646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:16.954 [2024-11-06 13:25:58.845651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:16.954 [2024-11-06 13:25:58.845656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:16.954 [2024-11-06 13:25:58.845661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.215 [2024-11-06 13:25:58.857505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.215 [2024-11-06 13:25:58.858075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.215 [2024-11-06 13:25:58.858105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.215 [2024-11-06 13:25:58.858114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.215 [2024-11-06 13:25:58.858278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.215 [2024-11-06 13:25:58.858430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.215 [2024-11-06 13:25:58.858436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.215 [2024-11-06 13:25:58.858442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.215 [2024-11-06 13:25:58.858448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.215 [2024-11-06 13:25:58.870149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.215 [2024-11-06 13:25:58.870715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.215 [2024-11-06 13:25:58.870752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.215 [2024-11-06 13:25:58.870761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.215 [2024-11-06 13:25:58.870925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.871077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.871083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.871088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.871094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.882791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.883262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.883293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.883301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.883465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.883620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.883627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.883632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.883638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 5640.20 IOPS, 22.03 MiB/s [2024-11-06T12:25:59.118Z] [2024-11-06 13:25:58.896492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.897077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.897108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.897117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.897281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.897432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.897439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.897444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.897450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.909191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.909649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.909680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.909689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.909860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.910012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.910018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.910024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.910030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.921888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.922460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.922490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.922499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.922663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.922822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.922829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.922838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.922844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.934558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.935157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.935188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.935197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.935361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.935513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.935519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.935525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.935530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.947241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.947777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.947807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.947816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.947980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.948132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.948139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.948144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.948150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.959860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.960329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.960343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.960349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.960497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.960646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.960651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.960656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.960661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.972504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.973040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.973070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.973079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.973245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.973397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.973403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.973409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.973414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.985134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.985713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.985743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.985758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.216 [2024-11-06 13:25:58.985922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.216 [2024-11-06 13:25:58.986073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.216 [2024-11-06 13:25:58.986080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.216 [2024-11-06 13:25:58.986085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.216 [2024-11-06 13:25:58.986091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.216 [2024-11-06 13:25:58.997792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:17.216 [2024-11-06 13:25:58.998262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:17.216 [2024-11-06 13:25:58.998276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:17.216 [2024-11-06 13:25:58.998282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:17.217 [2024-11-06 13:25:58.998430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:17.217 [2024-11-06 13:25:58.998578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:17.217 [2024-11-06 13:25:58.998584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:17.217 [2024-11-06 13:25:58.998589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:17.217 [2024-11-06 13:25:58.998594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:17.217 [2024-11-06 13:25:59.010474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.011047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.011078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.011090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.011254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.011406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.011412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.011417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.011423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.217 [2024-11-06 13:25:59.023120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.023690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.023720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.023728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.023900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.024052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.024058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.024064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.024070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.217 [2024-11-06 13:25:59.035764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.036316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.036346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.036355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.036519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.036670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.036676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.036682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.036688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.217 [2024-11-06 13:25:59.048386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.048932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.048962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.048971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.049135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.049290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.049297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.049302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.049308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.217 [2024-11-06 13:25:59.061008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.061488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.061502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.061508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.061656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.061811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.061817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.061822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.061827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.217 [2024-11-06 13:25:59.073658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.074116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.074129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.074134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.074282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.074430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.074436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.074441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.074446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.217 [2024-11-06 13:25:59.086282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.086811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.086842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.086850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.087017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.087169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.087175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.087184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.087190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.217 [2024-11-06 13:25:59.098889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.099371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.099386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.099391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.099539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.099688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.099693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.099699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.099703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.217 [2024-11-06 13:25:59.111555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.217 [2024-11-06 13:25:59.112018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.217 [2024-11-06 13:25:59.112031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.217 [2024-11-06 13:25:59.112037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.217 [2024-11-06 13:25:59.112185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.217 [2024-11-06 13:25:59.112333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.217 [2024-11-06 13:25:59.112339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.217 [2024-11-06 13:25:59.112344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.217 [2024-11-06 13:25:59.112349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.479 [2024-11-06 13:25:59.124181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.479 [2024-11-06 13:25:59.124766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.479 [2024-11-06 13:25:59.124796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.479 [2024-11-06 13:25:59.124805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.479 [2024-11-06 13:25:59.124969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.479 [2024-11-06 13:25:59.125120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.479 [2024-11-06 13:25:59.125127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.479 [2024-11-06 13:25:59.125132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.479 [2024-11-06 13:25:59.125138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.479 [2024-11-06 13:25:59.136862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.479 [2024-11-06 13:25:59.137344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.479 [2024-11-06 13:25:59.137358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.479 [2024-11-06 13:25:59.137364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.479 [2024-11-06 13:25:59.137513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.479 [2024-11-06 13:25:59.137662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.479 [2024-11-06 13:25:59.137667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.479 [2024-11-06 13:25:59.137672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.137677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.480 [2024-11-06 13:25:59.149532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.149849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.149865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.149870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.150019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.150167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.150173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.150178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.150183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.480 [2024-11-06 13:25:59.162178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.162770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.162801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.162809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.162974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.163125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.163132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.163138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.163144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.480 [2024-11-06 13:25:59.174851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.175331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.175345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.175355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.175504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.175653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.175659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.175664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.175669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.480 [2024-11-06 13:25:59.187538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.187994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.188008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.188013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.188161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.188310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.188316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.188320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.188325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.480 [2024-11-06 13:25:59.200185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.200786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.200817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.200825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.200989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.201149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.201156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.201162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.201167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.480 [2024-11-06 13:25:59.212883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.213338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.213353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.213358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.213507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.213659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.213665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.213670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.213675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.480 [2024-11-06 13:25:59.225513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.225986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.225999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.226005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.226153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.226301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.226307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.226312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.226316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.480 [2024-11-06 13:25:59.238152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.238473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.238486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.238491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.238639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.238791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.238797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.238802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.238807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.480 [2024-11-06 13:25:59.250782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.251322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.251353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.251361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.251525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.251677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.251684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.480 [2024-11-06 13:25:59.251692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.480 [2024-11-06 13:25:59.251698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.480 [2024-11-06 13:25:59.263399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.480 [2024-11-06 13:25:59.263854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.480 [2024-11-06 13:25:59.263870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.480 [2024-11-06 13:25:59.263875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.480 [2024-11-06 13:25:59.264024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.480 [2024-11-06 13:25:59.264173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.480 [2024-11-06 13:25:59.264179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.264184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.264189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.481 [2024-11-06 13:25:59.276026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.276567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.276597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.276605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.276775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.276927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.276933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.276939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.276945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.481 [2024-11-06 13:25:59.288649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.289211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.289242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.289250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.289414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.289566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.289572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.289578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.289584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.481 [2024-11-06 13:25:59.301290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.301755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.301771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.301776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.301925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.302074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.302080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.302085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.302089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.481 [2024-11-06 13:25:59.313935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.314423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.314436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.314441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.314590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.314738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.314743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.314753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.314759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.481 [2024-11-06 13:25:59.326587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.327081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.327094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.327099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.327247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.327396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.327401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.327406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.327411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.481 [2024-11-06 13:25:59.339247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.339818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.339848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.339860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.340027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.340178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.340184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.340190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.340196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.481 [2024-11-06 13:25:59.351904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.352268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.352284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.352290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.352439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.352587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.352593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.352598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.352603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.481 [2024-11-06 13:25:59.364583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.365154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.365185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.365194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.365357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.481 [2024-11-06 13:25:59.365509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.481 [2024-11-06 13:25:59.365515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.481 [2024-11-06 13:25:59.365521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.481 [2024-11-06 13:25:59.365527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.481 [2024-11-06 13:25:59.377231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.481 [2024-11-06 13:25:59.377729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.481 [2024-11-06 13:25:59.377748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.481 [2024-11-06 13:25:59.377756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.481 [2024-11-06 13:25:59.377905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.744 [2024-11-06 13:25:59.378058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.744 [2024-11-06 13:25:59.378066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.744 [2024-11-06 13:25:59.378071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.378076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.745 [2024-11-06 13:25:59.389914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.390247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.390260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.390265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.390413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.390561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.390566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.390571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.390576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.745 [2024-11-06 13:25:59.402562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.402990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.403020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.403028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.403195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.403347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.403353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.403359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.403364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.745 [2024-11-06 13:25:59.415224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.415815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.415846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.415855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.416021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.416173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.416179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.416185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.416194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.745 [2024-11-06 13:25:59.427901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.428355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.428370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.428376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.428524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.428672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.428678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.428683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.428688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
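The errno = 111 that posix_sock_create keeps logging is Linux ECONNREFUSED: the target side was killed, so every TCP connect to 10.0.0.2:4420 is refused and the reconnect poll fails immediately. The following is a standalone POSIX sketch, not SPDK code, that reproduces the same failure on a host where the address is reachable but nothing listens on the port; the address and port are taken from the log lines above.

```c
/* Standalone POSIX sketch (not SPDK code): reproduce the errno = 111
 * (ECONNREFUSED) that posix_sock_create logs when no listener is bound
 * on the target address/port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target killed but the host reachable, connect() fails
         * with ECONNREFUSED, which is errno 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```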
00:29:17.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1909389 Killed "${NVMF_APP[@]}" "$@" 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.745 [2024-11-06 13:25:59.440526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.745 [2024-11-06 13:25:59.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.441013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.441018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.441166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.441314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.441320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.441324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.441329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1911023 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1911023 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1911023 ']' 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:17.745 13:25:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.745 [2024-11-06 13:25:59.453203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.453672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.453703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.453711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.453882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.454035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.454042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.454047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.454053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.745 [2024-11-06 13:25:59.465895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.466364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.466378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.466384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.466533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.466682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.466688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.466693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.466698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
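waitforlisten above is a shell helper from autotest_common.sh that blocks until the freshly started nvmf_tgt (pid 1911023) accepts RPCs on /var/tmp/spdk.sock. As a rough sketch of the same idea, under the assumption that "listening" simply means a connect() on the UNIX socket succeeds, one could poll like this; the function name and retry counts are illustrative, not the actual helper.

```c
/* Hedged sketch of the "wait for /var/tmp/spdk.sock" idea behind
 * waitforlisten: retry connect() on the UNIX-domain RPC socket until
 * the target is up or the attempts run out. Not the autotest helper. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listener(const char *path, int max_tries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and accepting RPCs */
        }
        close(fd);
        usleep(100 * 1000);     /* 100 ms between attempts */
    }
    return -1;                  /* gave up waiting */
}

int main(void)
{
    if (wait_for_listener("/var/tmp/spdk.sock", 100) == 0)
        puts("nvmf_tgt is listening");
    else
        puts("timed out waiting for /var/tmp/spdk.sock");
    return 0;
}
```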
00:29:17.745 [2024-11-06 13:25:59.478534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.479209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.479239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.745 [2024-11-06 13:25:59.479248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.745 [2024-11-06 13:25:59.479415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.745 [2024-11-06 13:25:59.479568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.745 [2024-11-06 13:25:59.479574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.745 [2024-11-06 13:25:59.479579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.745 [2024-11-06 13:25:59.479585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.745 [2024-11-06 13:25:59.491156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.745 [2024-11-06 13:25:59.491772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.745 [2024-11-06 13:25:59.491802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.491811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.491977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.492128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.492135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.492140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.492146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.746 [2024-11-06 13:25:59.498909] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:29:17.746 [2024-11-06 13:25:59.498954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.746 [2024-11-06 13:25:59.503855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.504341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.504356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.504362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.504511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.504660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.504667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.504672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.504677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.746 [2024-11-06 13:25:59.516522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.516986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.517000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.517006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.517154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.517303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.517309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.517314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.517319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
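The `-c 0xE` in the DPDK EAL parameters is the core mask passed through from `nvmfappstart -m 0xE`: each set bit selects one CPU core for the target's reactors. A small sketch decoding the mask shows that 0xE (binary 1110) pins the target to cores 1-3 and leaves core 0 for other work; the decoding loop is illustrative, not part of SPDK or DPDK.

```c
/* Sketch: decode the -c 0xE core mask from the EAL parameters above.
 * Bit i set means core i is used, so 0xE (binary 1110) selects cores 1-3. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;
    printf("core mask 0x%llX ->", mask);
    for (int core = 0; core < 64; core++)
        if (mask & (1ULL << core))
            printf(" %d", core);
    printf("\n");   /* prints: core mask 0xE -> 1 2 3 */
    return 0;
}
```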
00:29:17.746 [2024-11-06 13:25:59.529161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.529587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.529618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.529626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.529799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.529951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.529958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.529964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.529970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.746 [2024-11-06 13:25:59.541759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.542109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.542125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.542130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.542279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.542428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.542434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.542439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.542444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.746 [2024-11-06 13:25:59.554425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.555035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.555066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.555075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.555239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.555392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.555398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.555404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.555409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.746 [2024-11-06 13:25:59.567118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.567625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.567644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.567650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.567803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.567952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.567959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.567964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.567969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.746 [2024-11-06 13:25:59.579809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.580276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.580290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.580296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.580444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.580592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.580599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.580604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.580609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.746 [2024-11-06 13:25:59.591283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:17.746 [2024-11-06 13:25:59.592444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.592822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.592836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.592841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.592989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.593137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.593143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.593148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.746 [2024-11-06 13:25:59.593153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:17.746 [2024-11-06 13:25:59.605010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.746 [2024-11-06 13:25:59.605524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.746 [2024-11-06 13:25:59.605538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.746 [2024-11-06 13:25:59.605544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.746 [2024-11-06 13:25:59.605696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.746 [2024-11-06 13:25:59.605858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.746 [2024-11-06 13:25:59.605865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.746 [2024-11-06 13:25:59.605870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.747 [2024-11-06 13:25:59.605875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.747 [2024-11-06 13:25:59.617573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.747 [2024-11-06 13:25:59.618062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.747 [2024-11-06 13:25:59.618093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.747 [2024-11-06 13:25:59.618102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.747 [2024-11-06 13:25:59.618270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.747 [2024-11-06 13:25:59.618422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.747 [2024-11-06 13:25:59.618429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.747 [2024-11-06 13:25:59.618434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.747 [2024-11-06 13:25:59.618440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:17.747 [2024-11-06 13:25:59.620531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.747 [2024-11-06 13:25:59.620553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.747 [2024-11-06 13:25:59.620560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.747 [2024-11-06 13:25:59.620565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.747 [2024-11-06 13:25:59.620570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.747 [2024-11-06 13:25:59.621672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.747 [2024-11-06 13:25:59.621806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.747 [2024-11-06 13:25:59.622000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.747 [2024-11-06 13:25:59.630155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:17.747 [2024-11-06 13:25:59.630632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.747 [2024-11-06 13:25:59.630664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:17.747 [2024-11-06 13:25:59.630673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:17.747 [2024-11-06 13:25:59.630847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:17.747 [2024-11-06 13:25:59.631000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:17.747 [2024-11-06 13:25:59.631007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:17.747 [2024-11-06 13:25:59.631013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:17.747 [2024-11-06 13:25:59.631019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.009 [2024-11-06 13:25:59.642733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.643252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.643284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.643293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.643458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.643610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.643616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.643622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.643628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
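The three "Reactor started on core N" notices above (cores 1, 2, 3) and the earlier "Total cores available: 3" line are consistent with the "-c 0xE" core mask in the DPDK EAL parameters: 0xE = 0b1110 selects cores 1 through 3 and leaves core 0 free. A small sketch of that decoding, just to make the arithmetic explicit:

/* coremask.c - sketch: decode a DPDK-style hex core mask.
 * 0xE = 0b1110 selects cores 1, 2 and 3, matching the three
 * "Reactor started on core N" notices in the log. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;   /* from "-c 0xE" in the EAL parameters */
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("reactor runs on core %d\n", core);
        }
    }
    return 0;
}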
00:29:18.009 [2024-11-06 13:25:59.655340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.655859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.655881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.656030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.656180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.656185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.656191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.656196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.009 [2024-11-06 13:25:59.668036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.668509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.668522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.668528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.668676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.668831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.668837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.668842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.668847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.009 [2024-11-06 13:25:59.680681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.681232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.681268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.681277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.681441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.681594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.681600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.681606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.681612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.009 [2024-11-06 13:25:59.693319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.693962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.693992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.694001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.694166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.694318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.694325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.694331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.694337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.009 [2024-11-06 13:25:59.705934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.706307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.706321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.706327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.706476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.706625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.706631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.706636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.706641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.009 [2024-11-06 13:25:59.718529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.718918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.009 [2024-11-06 13:25:59.718948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.009 [2024-11-06 13:25:59.718957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.009 [2024-11-06 13:25:59.719127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.009 [2024-11-06 13:25:59.719280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.009 [2024-11-06 13:25:59.719286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.009 [2024-11-06 13:25:59.719291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.009 [2024-11-06 13:25:59.719297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.009 [2024-11-06 13:25:59.731151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.009 [2024-11-06 13:25:59.731656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.731687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.731696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.731869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.732021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.732027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.732033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.732038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.743740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.744362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.744393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.744402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.744569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.744720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.744727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.744732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.744738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.756448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.756936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.756951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.756957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.757106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.757254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.757260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.757272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.757277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.769121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.769602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.769616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.769621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.769773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.769922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.769927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.769932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.769938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.781717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.782186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.782200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.782205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.782353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.782501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.782507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.782512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.782517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.794360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.794790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.794820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.794828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.794993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.795145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.795151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.795156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.795161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.807036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.807665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.807695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.807704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.807875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.808027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.808034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.808039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.808045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.819610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.820179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.820209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.820218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.820383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.820534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.820540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.820546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.820551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.832262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.832721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.832757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.832765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.832930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.833082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.833088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.833093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.833099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.844943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.845531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.845561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.845573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.845737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.845896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.845903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.845908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.845914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.857614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.858262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.858292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.858301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.858465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.858616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.858623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.858628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.858634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.010 [2024-11-06 13:25:59.870200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.870585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.870615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.870623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.870796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.870947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.010 [2024-11-06 13:25:59.870954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.010 [2024-11-06 13:25:59.870959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.010 [2024-11-06 13:25:59.870965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.010 [2024-11-06 13:25:59.882806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.010 [2024-11-06 13:25:59.883376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.010 [2024-11-06 13:25:59.883407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.010 [2024-11-06 13:25:59.883415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.010 [2024-11-06 13:25:59.883580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.010 [2024-11-06 13:25:59.883735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.011 [2024-11-06 13:25:59.883742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.011 [2024-11-06 13:25:59.883753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.011 [2024-11-06 13:25:59.883759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.011 [2024-11-06 13:25:59.895460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.011 [2024-11-06 13:25:59.895848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.011 [2024-11-06 13:25:59.895878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.011 [2024-11-06 13:25:59.895887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.011 [2024-11-06 13:25:59.896054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.011 [2024-11-06 13:25:59.896205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.011 [2024-11-06 13:25:59.896211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.011 [2024-11-06 13:25:59.896216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.011 [2024-11-06 13:25:59.896222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.272 4700.17 IOPS, 18.36 MiB/s [2024-11-06T12:26:00.174Z] [2024-11-06 13:25:59.908125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.272 [2024-11-06 13:25:59.908571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.272 [2024-11-06 13:25:59.908585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.272 [2024-11-06 13:25:59.908591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.272 [2024-11-06 13:25:59.908740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.272 [2024-11-06 13:25:59.908895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.272 [2024-11-06 13:25:59.908901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.272 [2024-11-06 13:25:59.908906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.272 [2024-11-06 13:25:59.908911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.272 [2024-11-06 13:25:59.920743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.272 [2024-11-06 13:25:59.921326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.272 [2024-11-06 13:25:59.921356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.272 [2024-11-06 13:25:59.921365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.272 [2024-11-06 13:25:59.921528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.272 [2024-11-06 13:25:59.921680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.272 [2024-11-06 13:25:59.921687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.272 [2024-11-06 13:25:59.921696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.272 [2024-11-06 13:25:59.921701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
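The interleaved performance sample above, "4700.17 IOPS, 18.36 MiB/s", implies a 4 KiB I/O size: 4700.17 x 4096 B is roughly 18.36 MiB per second. The block size is inferred from the ratio, not stated in the log. A quick sanity check of that arithmetic:

/* throughput.c - sketch: sanity-check the interleaved perf sample.
 * 4700.17 IOPS at an assumed 4 KiB block size gives ~18.36 MiB/s,
 * matching the log line. */
#include <stdio.h>

int main(void)
{
    double iops = 4700.17;
    double block_bytes = 4096.0;              /* inferred, not logged */
    double mib_s = iops * block_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS * 4 KiB = %.2f MiB/s\n", iops, mib_s);
    return 0;
}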
00:29:18.272 [2024-11-06 13:25:59.933414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.272 [2024-11-06 13:25:59.933793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.933815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.933821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.933976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.934125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.934131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.934136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.934141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.273 [2024-11-06 13:25:59.945987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:25:59.946571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.946601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.946610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.946781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.946933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.946940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.946945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.946950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.273 [2024-11-06 13:25:59.958657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:25:59.959212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.959242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.959251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.959415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.959567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.959573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.959578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.959584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.273 [2024-11-06 13:25:59.971296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:25:59.971759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.971774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.971780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.971928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.972077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.972083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.972088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.972092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.273 [2024-11-06 13:25:59.983930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:25:59.984509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.984539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.984547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.984714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.984874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.984881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.984887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.984893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.273 [2024-11-06 13:25:59.996594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:25:59.997221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:25:59.997252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:25:59.997260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:25:59.997425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:25:59.997577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:25:59.997583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:25:59.997589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:25:59.997595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.273 [2024-11-06 13:26:00.009213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:26:00.009470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:26:00.009484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:26:00.009493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:26:00.009642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:26:00.009796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:26:00.009803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:26:00.009808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:26:00.009813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.273 [2024-11-06 13:26:00.021837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:26:00.022305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:26:00.022318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:26:00.022324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:26:00.022472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:26:00.022620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:26:00.022626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:26:00.022631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:26:00.022636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.273 [2024-11-06 13:26:00.034472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:26:00.035124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:26:00.035155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:26:00.035164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:26:00.035333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:26:00.035488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:26:00.035494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:26:00.035500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:26:00.035506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:18.273 [2024-11-06 13:26:00.047108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:18.273 [2024-11-06 13:26:00.047513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.273 [2024-11-06 13:26:00.047528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420 00:29:18.273 [2024-11-06 13:26:00.047534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set 00:29:18.273 [2024-11-06 13:26:00.047683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor 00:29:18.273 [2024-11-06 13:26:00.047841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:18.273 [2024-11-06 13:26:00.047847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:18.273 [2024-11-06 13:26:00.047853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:18.273 [2024-11-06 13:26:00.047858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:18.273 [2024-11-06 13:26:00.059696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.273 [2024-11-06 13:26:00.060166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.060180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.060185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.060334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.060482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.060488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.060493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.060497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.072342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.072762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.072792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.072801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.072967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.073119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.073126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.073131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.073137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.084992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.085593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.085623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.085632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.085803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.085955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.085961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.085971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.085976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.097681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.098185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.098200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.098206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.098355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.098504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.098510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.098515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.098520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.110380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.110877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.110908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.110916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.111083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.111235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.111242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.111247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.111253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.122967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.123528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.123559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.123568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.123733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.123891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.123898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.123904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.123910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.135637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.136178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.136208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.136217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.136384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.136536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.136542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.136548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.136554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.148266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.148612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.148627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.148633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.148787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.148937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.148943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.148948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.148952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.274 [2024-11-06 13:26:00.160960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.274 [2024-11-06 13:26:00.161507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.274 [2024-11-06 13:26:00.161537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.274 [2024-11-06 13:26:00.161546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.274 [2024-11-06 13:26:00.161711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.274 [2024-11-06 13:26:00.161870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.274 [2024-11-06 13:26:00.161877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.274 [2024-11-06 13:26:00.161882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.274 [2024-11-06 13:26:00.161888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.173586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.174090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.174105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.174114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.537 [2024-11-06 13:26:00.174263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.537 [2024-11-06 13:26:00.174412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.537 [2024-11-06 13:26:00.174417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.537 [2024-11-06 13:26:00.174422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.537 [2024-11-06 13:26:00.174427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.186270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.186754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.186768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.186773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.537 [2024-11-06 13:26:00.186921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.537 [2024-11-06 13:26:00.187070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.537 [2024-11-06 13:26:00.187076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.537 [2024-11-06 13:26:00.187080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.537 [2024-11-06 13:26:00.187085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.198919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.199408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.199420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.199425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.537 [2024-11-06 13:26:00.199573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.537 [2024-11-06 13:26:00.199721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.537 [2024-11-06 13:26:00.199727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.537 [2024-11-06 13:26:00.199732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.537 [2024-11-06 13:26:00.199737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.211588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.211904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.211917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.211922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.537 [2024-11-06 13:26:00.212070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.537 [2024-11-06 13:26:00.212221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.537 [2024-11-06 13:26:00.212228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.537 [2024-11-06 13:26:00.212234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.537 [2024-11-06 13:26:00.212239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.224226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.224715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.224762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.537 [2024-11-06 13:26:00.224927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.537 [2024-11-06 13:26:00.225079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.537 [2024-11-06 13:26:00.225085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.537 [2024-11-06 13:26:00.225091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.537 [2024-11-06 13:26:00.225096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.537 [2024-11-06 13:26:00.236800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.537 [2024-11-06 13:26:00.237271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.537 [2024-11-06 13:26:00.237302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.537 [2024-11-06 13:26:00.237311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.237478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.237630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.237636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.237642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.237648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.249501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.250086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.250117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.250126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.250291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.250443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.250450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.250460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.250466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.262173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.262801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.262831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.262840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.263008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.263159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.263167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.263173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.263178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.274740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.275062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.275077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.275083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.275231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.275380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.275385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.275390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.275395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.287369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.287842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.287855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.287860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.288009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.288157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.288163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.288168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.288173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.300009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.300487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.300500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.300505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.300653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.300806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.300812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.300817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.300822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
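The cycle repeating above is bdev_nvme's reconnect path: errno = 111 from connect() is ECONNREFUSED on Linux, meaning nothing is listening on 10.0.0.2:4420 yet, so every reset attempt fails until the target's listener comes up later in this log. The errno mapping can be confirmed with a one-liner (illustrative only, not part of the test):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'   # prints: 111 Connection refused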
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.538 [2024-11-06 13:26:00.312672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.313281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.313313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.313321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.313486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.313638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.313644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.313650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.313655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.325368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.326012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.326042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.326051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.326216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.326368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.326374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.326380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.326386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 [2024-11-06 13:26:00.337963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.538 [2024-11-06 13:26:00.338546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.538 [2024-11-06 13:26:00.338576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.538 [2024-11-06 13:26:00.338585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.538 [2024-11-06 13:26:00.338756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.538 [2024-11-06 13:26:00.338908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.538 [2024-11-06 13:26:00.338915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.538 [2024-11-06 13:26:00.338920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.538 [2024-11-06 13:26:00.338927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.538 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.539 [2024-11-06 13:26:00.350205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:18.539 [2024-11-06 13:26:00.350634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.351299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.351330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.351338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.351503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.351655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.351662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.351667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.351673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.539 [2024-11-06 13:26:00.363241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.363885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.363916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.363924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.364098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.364249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.364256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.364261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.364267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 [2024-11-06 13:26:00.375840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.376498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.376529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.376538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.376705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.376866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.376873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.376879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.376885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 Malloc0
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.539 [2024-11-06 13:26:00.388449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.389070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.389100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.389109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.389274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.389425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.389432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.389438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.389444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.539 [2024-11-06 13:26:00.401156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.401613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.401628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.401634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.401786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.401935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.401941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.401947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.401952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:18.539 [2024-11-06 13:26:00.413804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.539 [2024-11-06 13:26:00.414388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.539 [2024-11-06 13:26:00.414418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a8280 with addr=10.0.0.2, port=4420
00:29:18.539 [2024-11-06 13:26:00.414427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a8280 is same with the state(6) to be set
00:29:18.539 [2024-11-06 13:26:00.414457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:18.539 [2024-11-06 13:26:00.414592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a8280 (9): Bad file descriptor
00:29:18.539 [2024-11-06 13:26:00.414752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:18.539 [2024-11-06 13:26:00.414759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:18.539 [2024-11-06 13:26:00.414764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:18.539 [2024-11-06 13:26:00.414770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:18.539 13:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1909860
00:29:18.539 [2024-11-06 13:26:00.426469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:18.800 [2024-11-06 13:26:00.453518] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
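The rpc_cmd calls traced above (host/bdevperf.sh lines 17 through 21) are the standard SPDK target bring-up; issued by hand against a running nvmf_tgt they would look roughly like the following sketch, which assumes the in-tree scripts/rpc.py and its default RPC socket and reuses the exact arguments from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the pending reconnect succeeds, which is exactly the "Resetting controller successful" transition logged above.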
00:29:20.061 4844.14 IOPS, 18.92 MiB/s
[2024-11-06T12:26:02.942Z] 5864.00 IOPS, 22.91 MiB/s
[2024-11-06T12:26:04.324Z] 6655.67 IOPS, 26.00 MiB/s
[2024-11-06T12:26:05.260Z] 7282.70 IOPS, 28.45 MiB/s
[2024-11-06T12:26:06.194Z] 7796.27 IOPS, 30.45 MiB/s
[2024-11-06T12:26:07.134Z] 8226.42 IOPS, 32.13 MiB/s
[2024-11-06T12:26:08.073Z] 8596.38 IOPS, 33.58 MiB/s
[2024-11-06T12:26:09.015Z] 8908.64 IOPS, 34.80 MiB/s
00:29:27.113 Latency(us)
00:29:27.113 [2024-11-06T12:26:09.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.113 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:27.113 Verification LBA range: start 0x0 length 0x4000
00:29:27.113 Nvme1n1 : 15.00 9182.10 35.87 13488.83 0.00 5627.19 703.15 14308.69
00:29:27.113 [2024-11-06T12:26:09.015Z] ===================================================================================================================
00:29:27.113 [2024-11-06T12:26:09.015Z] Total : 9182.10 35.87 13488.83 0.00 5627.19 703.15 14308.69
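For scale, the MiB/s column in the bdevperf summary above is simply IOPS times the 4096-byte IO size. A quick check of the final average (illustrative only, not part of the log):

  awk 'BEGIN { printf "%.2f MiB/s\n", 9182.10 * 4096 / (1024 * 1024) }'   # prints: 35.87 MiB/s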
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:27.372 rmmod nvme_tcp
00:29:27.372 rmmod nvme_fabrics
00:29:27.372 rmmod nvme_keyring
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1911023 ']'
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1911023
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1911023 ']'
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1911023
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1911023
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1911023'
00:29:27.372 killing process with pid 1911023
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1911023
00:29:27.372 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1911023
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:27.633 13:26:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:29.539
00:29:29.539 real 0m28.498s
00:29:29.539 user 1m3.767s
00:29:29.539 sys 0m7.782s
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:29.539 ************************************
00:29:29.539 END TEST nvmf_bdevperf
00:29:29.539 ************************************
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:29.539 13:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:29.801 ************************************
00:29:29.801 START TEST nvmf_target_disconnect
00:29:29.801 ************************************
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:29.801 * Looking for test storage...
00:29:29.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:29:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.801 --rc genhtml_branch_coverage=1
00:29:29.801 --rc genhtml_function_coverage=1
00:29:29.801 --rc genhtml_legend=1
00:29:29.801 --rc geninfo_all_blocks=1
00:29:29.801 --rc geninfo_unexecuted_blocks=1
00:29:29.801
00:29:29.801 '
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:29:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.801 --rc genhtml_branch_coverage=1
00:29:29.801 --rc genhtml_function_coverage=1
00:29:29.801 --rc genhtml_legend=1
00:29:29.801 --rc geninfo_all_blocks=1
00:29:29.801 --rc geninfo_unexecuted_blocks=1
00:29:29.801
00:29:29.801 '
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:29:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.801 --rc genhtml_branch_coverage=1
00:29:29.801 --rc genhtml_function_coverage=1
00:29:29.801 --rc genhtml_legend=1
00:29:29.801 --rc geninfo_all_blocks=1
00:29:29.801 --rc geninfo_unexecuted_blocks=1
00:29:29.801
00:29:29.801 '
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:29:29.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:29.801 --rc genhtml_branch_coverage=1
00:29:29.801 --rc genhtml_function_coverage=1
00:29:29.801 --rc genhtml_legend=1
00:29:29.801 --rc geninfo_all_blocks=1
00:29:29.801 --rc geninfo_unexecuted_blocks=1
00:29:29.801
00:29:29.801 '
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:29.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:29.801 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:29.802 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:29:30.063 13:26:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- #
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.201 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:38.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:38.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:38.202 Found net devices under 0000:31:00.0: cvl_0_0 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:38.202 Found net devices under 0000:31:00.1: cvl_0_1 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
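For readers following the discovery trace above: nvmf/common.sh resolves each supported PCI function to its kernel network interfaces purely through sysfs, which is why the two e810 functions surface here as cvl_0_0 and cvl_0_1. A minimal standalone sketch of that lookup, assuming a PCI address such as 0000:31:00.0 (host-specific):

    pci=0000:31:00.0                                  # assumed address, taken from the trace above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one sysfs entry per netdev the function exposes
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keeping only interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
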
00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.202 13:26:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:29:38.202 00:29:38.202 --- 10.0.0.2 ping statistics --- 00:29:38.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.202 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:38.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:29:38.202 00:29:38.202 --- 10.0.0.1 ping statistics --- 00:29:38.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.202 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:38.202 ************************************ 00:29:38.202 START TEST nvmf_target_disconnect_tc1 00:29:38.202 ************************************ 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:38.202 13:26:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:38.202 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.203 [2024-11-06 13:26:19.528998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.203 [2024-11-06 13:26:19.529094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a5f60 with addr=10.0.0.2, port=4420 00:29:38.203 [2024-11-06 13:26:19.529129] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:38.203 [2024-11-06 13:26:19.529143] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:38.203 [2024-11-06 13:26:19.529155] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:38.203 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:38.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:38.203 Initializing NVMe Controllers 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:38.203 00:29:38.203 real 0m0.148s 00:29:38.203 user 0m0.068s 00:29:38.203 sys 0m0.081s 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:38.203 ************************************ 00:29:38.203 END TEST nvmf_target_disconnect_tc1 00:29:38.203 ************************************ 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
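The tc1 PASS above hinges on inversion: the reconnect example is required to fail (no controller is reachable yet), and the NOT helper turns its nonzero exit status into success. A simplified sketch of that logic, with expect_failure as a hypothetical stand-in for the real helper in autotest_common.sh (which, as the `(( es > 128 ))` check in the trace shows, additionally treats exit codes above 128 as signal deaths rather than passes):

    # expect_failure is a hypothetical, reduced version of the NOT helper.
    expect_failure() {
        local es=0
        "$@" || es=$?        # capture the wrapped command's exit status
        (( es != 0 ))        # pass only when the command failed
    }
    # Usage mirroring the trace (arguments as logged above):
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
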
00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:38.203 ************************************ 00:29:38.203 START TEST nvmf_target_disconnect_tc2 00:29:38.203 ************************************ 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1917279 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1917279 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1917279 ']' 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:38.203 13:26:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.203 [2024-11-06 13:26:19.697982] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:29:38.203 [2024-11-06 13:26:19.698041] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.203 [2024-11-06 13:26:19.797779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.203 [2024-11-06 13:26:19.850739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.203 [2024-11-06 13:26:19.850797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
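The target command line above already predicts the reactor placement reported just below: -m 0xF0 is a CPU mask (binary 1111 0000, i.e. cores 4 through 7), and the ip netns exec prefix confines the whole app to the test namespace holding cvl_0_0. Reconstructed as a plain shell invocation (paths and flags as logged; the backgrounding is an assumption, since the harness uses its own process tracking):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &    # -m 0xF0 -> cores 4,5,6,7; -e 0xFFFF enables all tracepoint groups
    nvmfpid=$!                      # the trace stores this PID as nvmfpid (1917279 on this run)
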
00:29:38.203 [2024-11-06 13:26:19.850805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.203 [2024-11-06 13:26:19.850812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.203 [2024-11-06 13:26:19.850819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.203 [2024-11-06 13:26:19.853098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:38.203 [2024-11-06 13:26:19.853263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:38.203 [2024-11-06 13:26:19.853423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:38.203 [2024-11-06 13:26:19.853424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 Malloc0 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 [2024-11-06 13:26:20.608443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 13:26:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 [2024-11-06 13:26:20.648897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1917336 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:38.775 13:26:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.356 13:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1917279 00:29:41.356 13:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error 
(sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Read completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 Write completed with error (sct=0, sc=8) 00:29:41.356 starting I/O failed 00:29:41.356 [2024-11-06 13:26:22.687696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:41.356 [2024-11-06 13:26:22.688268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.356 [2024-11-06 13:26:22.688338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.356 qpair failed and we were unable to recover it. 00:29:41.356 [2024-11-06 13:26:22.688639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.356 [2024-11-06 13:26:22.688652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.356 qpair failed and we were unable to recover it. 00:29:41.356 [2024-11-06 13:26:22.688986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.356 [2024-11-06 13:26:22.689039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.356 qpair failed and we were unable to recover it. 
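The queue-pair errors above are the intended effect of the kill -9 issued against the target that tc2 assembled moments earlier through rpc_cmd. That bring-up, restated as direct scripts/rpc.py calls for reference (same sizes, NQN, and listener as in the trace; the rpc.py path is relative to the SPDK checkout and the default /var/tmp/spdk.sock socket is assumed):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_transport -t tcp -o                            # TCP transport, options as logged
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
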
00:29:41.357 [2024-11-06 13:26:22.689436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.689451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.689670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.689682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.690104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.690159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.690529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.690545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.690988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.691044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.691391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.691405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.691719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.691731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.692096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.692108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.692472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.692486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.692999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.693053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 
00:29:41.357 [2024-11-06 13:26:22.693360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.693375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.693709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.693721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.694064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.694076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.694381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.694393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.694738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.694755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.695130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.695144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.695492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.695503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.695824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.695837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.696051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.696063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.696298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.696309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 
00:29:41.357 [2024-11-06 13:26:22.696700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.696711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.697064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.697076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.697430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.697443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.697676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.697689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.698020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.698032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.698224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.698236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.698421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.698433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.698777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.698790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.699143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.699154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.699503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.699517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 
00:29:41.357 [2024-11-06 13:26:22.699821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.699837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.700196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.700208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.700553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.700565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.700923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.700936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.701141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.701152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.701453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.701466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.701685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.701697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.701998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.357 [2024-11-06 13:26:22.702011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.357 qpair failed and we were unable to recover it. 00:29:41.357 [2024-11-06 13:26:22.702226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.702237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.702495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.702506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 
00:29:41.358 [2024-11-06 13:26:22.702855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.702866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.703197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.703208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.703524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.703535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.703848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.703859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.704179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.704189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.704523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.704534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.704956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.704967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.705281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.705291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.705491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.705503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 00:29:41.358 [2024-11-06 13:26:22.705751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.358 [2024-11-06 13:26:22.705762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.358 qpair failed and we were unable to recover it. 
00:29:41.363 [2024-11-06 13:26:22.777798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.777828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.778083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.778111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.778482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.778510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.778878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.778909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.779274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.779303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.779678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.779706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.780117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.780146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.780486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.780520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.780898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.780929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.781299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.781329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 
00:29:41.363 [2024-11-06 13:26:22.781700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.781729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.363 qpair failed and we were unable to recover it. 00:29:41.363 [2024-11-06 13:26:22.782085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.363 [2024-11-06 13:26:22.782113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.782488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.782516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.782895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.782927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.783258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.783287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.783653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.783682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.784035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.784065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.784443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.784471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.784821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.784852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.785222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.785251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 
00:29:41.364 [2024-11-06 13:26:22.785459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.785489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.785861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.785892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.786218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.786246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.786575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.786604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.787011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.787041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.787407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.787435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.787788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.787819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.788187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.788216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.788564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.788592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.788951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.788981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 
00:29:41.364 [2024-11-06 13:26:22.789396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.789425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.789691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.789719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.790176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.790206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.790428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.790458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.790827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.790858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.791230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.791258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.791512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.791544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.791929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.791959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.792310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.792340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.792703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.792733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 
00:29:41.364 [2024-11-06 13:26:22.793048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.793078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.793413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.793442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.793805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.793836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.794086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.794118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.794501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.794530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.794847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.794877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.795237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.795266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.795500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.795536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.795797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.795828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 00:29:41.364 [2024-11-06 13:26:22.796296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.364 [2024-11-06 13:26:22.796325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.364 qpair failed and we were unable to recover it. 
00:29:41.365 [2024-11-06 13:26:22.796672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.796701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.797050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.797080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.797441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.797470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.797823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.797853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.798229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.798258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.798380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.798411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.798776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.798809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.799176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.799205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.799619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.799648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.799974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.800005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 
00:29:41.365 [2024-11-06 13:26:22.800314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.800342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.800706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.800737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.801084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.801114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.801494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.801522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.801888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.801918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.802262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.802292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.802684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.803040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.803071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.803322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.803351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.803774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.803805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 
00:29:41.365 [2024-11-06 13:26:22.804086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.804115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.804469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.804497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.804865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.804895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.805259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.805289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.805638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.805668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.806047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.806078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.806440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.806469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.806830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.806862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.807112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.807140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.807497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.807527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 
00:29:41.365 [2024-11-06 13:26:22.807871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.807901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.808279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.808307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.808675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.808705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.365 qpair failed and we were unable to recover it. 00:29:41.365 [2024-11-06 13:26:22.809101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.365 [2024-11-06 13:26:22.809131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.809502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.809532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.809880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.809910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.810260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.810288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.810656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.810691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.811112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.811144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.811474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.811502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 
00:29:41.366 [2024-11-06 13:26:22.811884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.811915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.812259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.812288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.812539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.812571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.812929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.812959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.813326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.813354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.813680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.813709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.814087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.814117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.814479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.814508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.814895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.814925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.815297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.815325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 
00:29:41.366 [2024-11-06 13:26:22.815703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.815731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.816129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.816158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.816523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.816554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.816923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.816955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.817294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.817323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.817685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.817714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.818079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.818108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.818454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.818483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.818852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.818883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.819262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.819291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 
00:29:41.366 [2024-11-06 13:26:22.819671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.819700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.820070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.820099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.820334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.820366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.820765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.820796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.821237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.821266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.821601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.821632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.821983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.822015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.822381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.822413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.822766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.366 [2024-11-06 13:26:22.822795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.366 qpair failed and we were unable to recover it. 00:29:41.366 [2024-11-06 13:26:22.823154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.823183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 
00:29:41.367 [2024-11-06 13:26:22.823525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.823554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.823899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.823932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.824268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.824296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.824624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.824652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.824982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.825012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.825386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.825415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.825789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.825819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.826196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.826230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.826558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.826588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 00:29:41.367 [2024-11-06 13:26:22.826843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.367 [2024-11-06 13:26:22.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.367 qpair failed and we were unable to recover it. 
00:29:41.367 [2024-11-06 13:26:22.827079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.367 [2024-11-06 13:26:22.827108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.367 qpair failed and we were unable to recover it.
00:29:41.367 [... the same triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 13:26:22.827 through 13:26:22.906 ...]
00:29:41.372 [2024-11-06 13:26:22.906907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.372 [2024-11-06 13:26:22.906937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.372 qpair failed and we were unable to recover it.
00:29:41.372 [2024-11-06 13:26:22.907320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.372 [2024-11-06 13:26:22.907349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.372 qpair failed and we were unable to recover it. 00:29:41.372 [2024-11-06 13:26:22.907719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.372 [2024-11-06 13:26:22.907764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.908125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.908154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.908501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.908531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.908934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.908965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.909208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.909237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.909544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.909574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.909910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.909941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.910110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.910141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.910523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.910552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 
00:29:41.373 [2024-11-06 13:26:22.910929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.910959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.911250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.911278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.911678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.911708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.912051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.912084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.912323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.912359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.912631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.912659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.912843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.912875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.913118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.913146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.913497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.913525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.913901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.913931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 
00:29:41.373 [2024-11-06 13:26:22.914318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.914348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.914592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.914621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.914970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.915000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.915343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.915372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.915621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.915650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.916029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.916059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.916366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.916394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.916604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.916637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.917002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.917033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.917388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.917417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 
00:29:41.373 [2024-11-06 13:26:22.917766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.917797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.918185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.918214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.918372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.918400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.918650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.918683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.919059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.919090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.919337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.919365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.919696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.919724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.919979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.920009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.920379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.920408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 00:29:41.373 [2024-11-06 13:26:22.920769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.373 [2024-11-06 13:26:22.920803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.373 qpair failed and we were unable to recover it. 
00:29:41.373 [2024-11-06 13:26:22.921057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.921086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.921465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.921495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.921878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.921909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.922268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.922297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.922644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.922673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.923040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.923071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.923322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.923353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.923713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.923743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.924100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.924130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.924504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.924533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 
00:29:41.374 [2024-11-06 13:26:22.924900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.924931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.925267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.925297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.925595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.925624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.925842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.925873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.925996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.926030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.926404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.926434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.926639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.926668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.927006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.927036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.927401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.927431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.927786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.927820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 
00:29:41.374 [2024-11-06 13:26:22.928194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.928224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.928585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.928614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.928877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.928907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.929285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.929313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.929674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.929949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.929982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.930334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.930372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.930599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.930628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.930998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.931029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.931384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.931412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 
00:29:41.374 [2024-11-06 13:26:22.931580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.931609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.931858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.931888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.932139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.932172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.932554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.374 [2024-11-06 13:26:22.932585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.374 qpair failed and we were unable to recover it. 00:29:41.374 [2024-11-06 13:26:22.932766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.932799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.933037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.933071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.933424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.933453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.933832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.933864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.934224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.934254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.934581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.934609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 
00:29:41.375 [2024-11-06 13:26:22.934984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.935017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.935392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.935421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.935789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.935821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.936194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.936223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.936602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.936632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.936994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.937024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.937376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.937405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.937697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.937726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.938005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.938035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.938395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.938425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 
00:29:41.375 [2024-11-06 13:26:22.938785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.938815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.939183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.939211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.939500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.939530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.939993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.940031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.940399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.940434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.940859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.940890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.941136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.941164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.941543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.941573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.941982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.942013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.942343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.942372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 
00:29:41.375 [2024-11-06 13:26:22.942741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.942785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.943145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.943174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.943542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.943570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.943959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.943990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.944265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.944293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.944661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.944692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.945036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.945067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.945403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.945433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.945693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.945722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.946072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.946103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 
00:29:41.375 [2024-11-06 13:26:22.946471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.946502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.375 [2024-11-06 13:26:22.946832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.375 [2024-11-06 13:26:22.946863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.375 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.947296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.947326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.947617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.947648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.947899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.947930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.948296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.948325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.948694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.948723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.949156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.949186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.949563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.949594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.949833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.949863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-11-06 13:26:22.950184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.950212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.950583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.950613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.951006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.951037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.951376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.951406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.951653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.951686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.952089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.952121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.952511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.952541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.952896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.952927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.953287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.953316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 00:29:41.376 [2024-11-06 13:26:22.953691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.376 [2024-11-06 13:26:22.953721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.376 qpair failed and we were unable to recover it. 
00:29:41.376 [2024-11-06 13:26:22.954113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.376 [2024-11-06 13:26:22.954142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.376 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix.c:1054:posix_sock_create, sock connection error in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, "qpair failed and we were unable to recover it") repeats for every retry from 13:26:22.954 through 13:26:23.033 (wall clock 00:29:41.376-00:29:41.381), always against tqpair=0x7fb930000b90, addr=10.0.0.2, port=4420 ...]
00:29:41.382 [2024-11-06 13:26:23.033900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.033932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.034296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.034325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.034676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.034706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.035073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.035103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.035460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.035489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.035852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.035882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.036186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.036215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.036486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.036514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.036862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.036893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.037263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.037292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 
00:29:41.382 [2024-11-06 13:26:23.037654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.037682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.038126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.038157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.038511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.038540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.038905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.038936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.039379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.039408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.039656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.039688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.040065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.040096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.040352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.040380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.040645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.040678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.040927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.040957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 
00:29:41.382 [2024-11-06 13:26:23.041315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.041346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.041731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.041775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.042134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.042163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.042520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.042549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.042895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.042925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.043300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.043330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.043686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.043716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.044107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.044137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.044510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.044539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.044909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.044939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 
00:29:41.382 [2024-11-06 13:26:23.045389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.045418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.045660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.045693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.046035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.046066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.046389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.046418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.046780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.382 [2024-11-06 13:26:23.046818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.382 qpair failed and we were unable to recover it. 00:29:41.382 [2024-11-06 13:26:23.047182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.047211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.047578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.047608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.047974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.048006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.048163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.048195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.048574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.048604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 
00:29:41.383 [2024-11-06 13:26:23.048968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.049000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.049329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.049358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.049722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.049767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.050138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.050168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.050600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.050628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.050998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.051029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.051339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.051368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.051734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.051775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.052107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.052138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.052523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.052553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 
00:29:41.383 [2024-11-06 13:26:23.052909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.052940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.053306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.053335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.053673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.053703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.054085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.054115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.054474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.054503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.054864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.054894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.055264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.055293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.055733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.055774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.056164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.056194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.056436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.056464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 
00:29:41.383 [2024-11-06 13:26:23.056824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.056855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.057222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.057252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.057604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.057634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.058008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.058038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.058394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.058422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.058785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.058817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.059204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.059233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.059594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.059623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.059863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.383 [2024-11-06 13:26:23.059896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.383 qpair failed and we were unable to recover it. 00:29:41.383 [2024-11-06 13:26:23.060263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.060291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 
00:29:41.384 [2024-11-06 13:26:23.060658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.060687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.060933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.060964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.061306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.061336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.061578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.061610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.061984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.062022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.062361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.062391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.062740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.062786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.063166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.063195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.063431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.063461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.063850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.063882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 
00:29:41.384 [2024-11-06 13:26:23.064237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.064267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.064641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.064669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.065011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.065042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.065413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.065442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.065802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.065835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.066250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.066281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.066699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.067119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.067149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.067510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.067539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.067892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.067923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 
00:29:41.384 [2024-11-06 13:26:23.068181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.068210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.068574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.068602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.068877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.068907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.069272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.069301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.069666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.069695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.070062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.070091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.070454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.070485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.070870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.070900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.071284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.071312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.071674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.071702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 
00:29:41.384 [2024-11-06 13:26:23.072121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.072151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.072520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.072550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.072838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.072869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.073097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.073128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.073524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.073554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.073917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.073947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.074318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.384 [2024-11-06 13:26:23.074347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.384 qpair failed and we were unable to recover it. 00:29:41.384 [2024-11-06 13:26:23.074772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.074804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.075150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.075179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.075552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.075580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 
00:29:41.385 [2024-11-06 13:26:23.075918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.075950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.076284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.076312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.076677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.076708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.077071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.077101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.077429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.077465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.077823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.077854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.078221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.078250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.078528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.078556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.078926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.078956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.079319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.079349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 
00:29:41.385 [2024-11-06 13:26:23.079719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.079759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.080110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.080139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.080500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.080529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.080892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.080922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.081308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.081336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.081707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.081736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.082187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.082217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.082435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.082466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.082833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.082864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 00:29:41.385 [2024-11-06 13:26:23.083220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.385 [2024-11-06 13:26:23.083249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.385 qpair failed and we were unable to recover it. 
00:29:41.385 [2024-11-06 13:26:23.083621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.385 [2024-11-06 13:26:23.083651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.385 qpair failed and we were unable to recover it.
00:29:41.385 [... the same three-line connect()/qpair-failure sequence repeats, with fresh timestamps only, for roughly 200 further attempts between 2024-11-06 13:26:23.083995 and 13:26:23.163101; every attempt targets tqpair=0x7fb930000b90 at addr=10.0.0.2, port=4420 and fails with errno = 111, ending with "qpair failed and we were unable to recover it." ...]
00:29:41.391 [2024-11-06 13:26:23.163101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.391 [2024-11-06 13:26:23.163130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.391 qpair failed and we were unable to recover it.
00:29:41.391 [2024-11-06 13:26:23.163500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.163530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.163901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.163937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.164293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.164322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.164694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.164723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.165087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.165118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.165464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.165494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.165840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.165870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.166241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.166270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.166627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.166655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.167022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.167051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 
00:29:41.391 [2024-11-06 13:26:23.167449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.167479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.167788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.167818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.168178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.168207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.168570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.168600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.168969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.169000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.169255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.169283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.169633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.169662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.170002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.170033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.170398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.170427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.170681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.170710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 
00:29:41.391 [2024-11-06 13:26:23.170992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.171023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.171370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.171400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.171779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.171810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.172044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.172075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.172448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.172479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.172847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.172879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.173261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.173290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.173652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.173681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.174131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.174161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 00:29:41.391 [2024-11-06 13:26:23.174535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.174565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.391 qpair failed and we were unable to recover it. 
00:29:41.391 [2024-11-06 13:26:23.174940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.391 [2024-11-06 13:26:23.174971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.175344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.175372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.175729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.175769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.176119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.176148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.176508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.176537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.176906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.176936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.177298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.177327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.177693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.177722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.178184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.178215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.178558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.178587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 
00:29:41.392 [2024-11-06 13:26:23.178878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.178909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.179263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.179300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.179646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.179676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.180023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.180054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.180414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.180445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.180804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.180837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.181212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.181244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.181610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.181639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.182001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.182035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.182388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.182418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 
00:29:41.392 [2024-11-06 13:26:23.182770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.182801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.183158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.183186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.183453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.183482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.183777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.183809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.184182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.184212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.184584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.184614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.184974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.185004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.185372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.185401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.185769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.185798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.186178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.186207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 
00:29:41.392 [2024-11-06 13:26:23.186523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.186553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.186914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.186944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.187380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.187408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.187650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.187682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.188041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.188072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.392 [2024-11-06 13:26:23.188314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.392 [2024-11-06 13:26:23.188345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.392 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.188705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.188734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.189109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.189141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.189482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.189511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.189878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.189910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 
00:29:41.393 [2024-11-06 13:26:23.190268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.190297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.190659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.190689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.191074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.191104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.191470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.191500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.191877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.191908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.192283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.192313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.192693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.192723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.193117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.193147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.193511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.193539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.193907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.193939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 
00:29:41.393 [2024-11-06 13:26:23.194310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.194339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.194731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.194777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.195178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.195207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.195559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.195588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.195931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.195961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.196304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.196335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.196595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.196624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.196979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.197010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.197341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.197371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.197728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.197772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 
00:29:41.393 [2024-11-06 13:26:23.198110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.198139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.198509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.198540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.198894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.198924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.199284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.199314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.199684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.199713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.200049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.200079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.200440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.200469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.200733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.200778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.201156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.201185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.201447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.201478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 
00:29:41.393 [2024-11-06 13:26:23.201865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.201897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.202246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.393 [2024-11-06 13:26:23.202276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.393 qpair failed and we were unable to recover it. 00:29:41.393 [2024-11-06 13:26:23.202616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.202644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.202898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.202927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.203176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.203209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.203567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.203596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.203964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.203994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.204355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.204385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.204766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.204798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.205159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.205189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 
00:29:41.394 [2024-11-06 13:26:23.205525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.205554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.205919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.205951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.206339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.206370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.206738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.206781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.207142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.207172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.207575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.207605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.208029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.208060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.208303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.208332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.208669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.208697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.209064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.209095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 
00:29:41.394 [2024-11-06 13:26:23.209489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.209519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.209768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.209806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.210202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.210232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.210576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.210605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.210996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.211027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.211455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.211484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.211840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.211870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.212232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.212262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.212620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.212650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 00:29:41.394 [2024-11-06 13:26:23.212908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.394 [2024-11-06 13:26:23.212939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.394 qpair failed and we were unable to recover it. 
00:29:41.394 [2024-11-06 13:26:23.213309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.394 [2024-11-06 13:26:23.213338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.394 qpair failed and we were unable to recover it.
00:29:41.394 [... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 208 more times, timestamps advancing from 13:26:23.213699 to 13:26:23.291877 ...]
00:29:41.674 [2024-11-06 13:26:23.292258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.674 [2024-11-06 13:26:23.292286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.674 qpair failed and we were unable to recover it.
00:29:41.674 [2024-11-06 13:26:23.292640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.292668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.293045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.293075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.293434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.293463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.293921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.293952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.294312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.294340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.294704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.294733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.294982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.295014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.295371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.295400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.295787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.295825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 00:29:41.674 [2024-11-06 13:26:23.295989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.674 [2024-11-06 13:26:23.296017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.674 qpair failed and we were unable to recover it. 
00:29:41.674 [2024-11-06 13:26:23.296423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.296452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.296691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.296722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.297116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.297146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.297511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.297541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.297909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.297939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.298306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.298335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.298688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.298716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.299138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.299167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.299386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.299416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.299795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.299826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 
00:29:41.675 [2024-11-06 13:26:23.299953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.299983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.300352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.300382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.300778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.300809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.301189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.301218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.301436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.301468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.301847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.302242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.302272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.302626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.302655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.303024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.303055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.303388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.303417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 
00:29:41.675 [2024-11-06 13:26:23.303784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.303815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.304046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.304076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.304317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.304348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.304692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.304721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.305083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.305113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.305355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.305385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.305739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.305780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.306145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.306173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.306543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.306571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.306922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.306952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 
00:29:41.675 [2024-11-06 13:26:23.307321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.307351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.307709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.307738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.308085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.308115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.308471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.308500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.308764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.308795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.309143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.309173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.309540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.309906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.309937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.675 qpair failed and we were unable to recover it. 00:29:41.675 [2024-11-06 13:26:23.310188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.675 [2024-11-06 13:26:23.310227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.310600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.310629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 
00:29:41.676 [2024-11-06 13:26:23.311005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.311035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.311391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.311420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.311770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.311800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.312198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.312226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.312596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.312632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.312985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.313015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.313357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.313387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.313765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.313795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.314142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.314172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.314531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.314560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 
00:29:41.676 [2024-11-06 13:26:23.314927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.314957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.315338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.315367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.315732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.315790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.316156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.316185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.316548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.316577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.316969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.317000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.317362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.317390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.317638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.317669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.317944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.317975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.318345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.318373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 
00:29:41.676 [2024-11-06 13:26:23.318732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.318773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.319138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.319166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.319408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.319436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.319802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.319833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.320221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.320250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.320633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.320662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.321022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.321052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.321403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.321432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.321798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.321829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.322205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.322234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 
00:29:41.676 [2024-11-06 13:26:23.322606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.322636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.322987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.323016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.323378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.323407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.323740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.323784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.324111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.324140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.324519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.324547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.676 [2024-11-06 13:26:23.324918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.676 [2024-11-06 13:26:23.324948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.676 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.325391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.325420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.325786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.325821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.326169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.326198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 
00:29:41.677 [2024-11-06 13:26:23.326571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.326600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.326974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.327003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.327348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.327376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.327740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.327779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.328025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.328053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.328422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.328450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.328812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.328843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.329252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.329280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.329616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.329646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.330002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.330032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 
00:29:41.677 [2024-11-06 13:26:23.330396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.330424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.330788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.330817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.331214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.331243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.331648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.331677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.332066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.332097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.332456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.332485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.332841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.332872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.333213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.333242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.333612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.333641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.333984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.334015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 
00:29:41.677 [2024-11-06 13:26:23.334261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.334293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.334665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.334694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.334926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.334957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.335337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.335365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.335627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.335655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.336024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.336055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.336280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.336310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.336554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.336587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.336842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.336876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.337256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.337285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 
00:29:41.677 [2024-11-06 13:26:23.337581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.337609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.337978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.338009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.338374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.338403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.338765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.338795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.339164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.677 [2024-11-06 13:26:23.339193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.677 qpair failed and we were unable to recover it. 00:29:41.677 [2024-11-06 13:26:23.339547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.339576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it. 00:29:41.678 [2024-11-06 13:26:23.339950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.339980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it. 00:29:41.678 [2024-11-06 13:26:23.340339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.340367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it. 00:29:41.678 [2024-11-06 13:26:23.340816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.340855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it. 00:29:41.678 [2024-11-06 13:26:23.341186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.341215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it. 
00:29:41.678 [2024-11-06 13:26:23.341443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.678 [2024-11-06 13:26:23.341473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.678 qpair failed and we were unable to recover it.
00:29:41.678-00:29:41.683 [... the same three-line failure repeats with only timestamps varying, from 13:26:23.341 through the final occurrence at 13:26:23.420589: connect() fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fb930000b90 to addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:41.683 [2024-11-06 13:26:23.420818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.683 [2024-11-06 13:26:23.420850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.683 qpair failed and we were unable to recover it. 00:29:41.683 [2024-11-06 13:26:23.421221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.683 [2024-11-06 13:26:23.421249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.683 qpair failed and we were unable to recover it. 00:29:41.683 [2024-11-06 13:26:23.421629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.683 [2024-11-06 13:26:23.421659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.683 qpair failed and we were unable to recover it. 00:29:41.683 [2024-11-06 13:26:23.421929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.683 [2024-11-06 13:26:23.421959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.422239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.422267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.422623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.422652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.422992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.423023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.423360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.423388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.423639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.423670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.424044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.424074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 
00:29:41.684 [2024-11-06 13:26:23.424436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.424464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.424828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.424858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.425117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.425146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.425496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.425525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.425961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.425992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.426376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.426406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.426769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.426800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.427155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.427183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.427560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.427589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.427927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.427957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 
00:29:41.684 [2024-11-06 13:26:23.428331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.428359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.428729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.428770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.429008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.429040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.429408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.429437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.429793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.429824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.430077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.430105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.430468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.430498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.430825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.430855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.431102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.431139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.431533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.431563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 
00:29:41.684 [2024-11-06 13:26:23.431920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.431951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.432319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.432347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.432721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.432759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.433117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.433144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.433497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.433526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.433896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.433928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.434217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.434630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.434659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.435004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.435034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.435393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.435422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 
00:29:41.684 [2024-11-06 13:26:23.435765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.435795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.684 [2024-11-06 13:26:23.436150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.684 [2024-11-06 13:26:23.436179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.684 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.436539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.436568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.436924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.436954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.437254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.437283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.437591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.437619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.437975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.438007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.438371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.438399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.438768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.438799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.439151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.439182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 
00:29:41.685 [2024-11-06 13:26:23.439550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.439579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.439826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.439858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.440241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.440271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.440649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.440679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.441017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.441047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.441422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.441453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.441819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.441850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.442224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.442255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.442614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.442644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.443002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.443032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 
00:29:41.685 [2024-11-06 13:26:23.443408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.443438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.443681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.443713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.444080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.444112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.444469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.444500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.444863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.444897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.445326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.445355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.445711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.445740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.446115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.446144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.446525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.446562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.446912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.446942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 
00:29:41.685 [2024-11-06 13:26:23.447298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.447327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.447685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.447715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.448084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.448114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.448475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.448504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.448856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.448888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.449153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.449184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.449549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.449579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.449917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.449946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.450366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.450396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 00:29:41.685 [2024-11-06 13:26:23.450766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.685 [2024-11-06 13:26:23.450798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.685 qpair failed and we were unable to recover it. 
00:29:41.685 [2024-11-06 13:26:23.451057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.451086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.451438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.451467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.451821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.451852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.452219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.452249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.452601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.452630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.452967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.452997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.453339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.453369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.453737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.453789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.454062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.454090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.454449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.454478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 
00:29:41.686 [2024-11-06 13:26:23.454838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.454869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.455250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.455281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.455641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.455671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.456051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.456082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.456428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.456456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.456813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.456844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.457214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.457244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.457604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.457634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.457997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.458027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.458270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.458302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 
00:29:41.686 [2024-11-06 13:26:23.458674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.458704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.459126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.459156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.459496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.459525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.459892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.459922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.460282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.460311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.460665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.460694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.461069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.461098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.461458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.461487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.461857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.461895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.462246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.462275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 
00:29:41.686 [2024-11-06 13:26:23.462499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.462529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.462928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.462959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.463315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.463346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.463698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.463728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.464101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.464133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.464494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.464523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.464923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.464955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.465312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.686 [2024-11-06 13:26:23.465342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.686 qpair failed and we were unable to recover it. 00:29:41.686 [2024-11-06 13:26:23.465709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.465738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.466167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.466198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 
00:29:41.687 [2024-11-06 13:26:23.466548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.466576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.466964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.466996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.467346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.467375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.467732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.467775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.468125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.468154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.468521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.468550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.468916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.468946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.469313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.469343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.469713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.469768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 00:29:41.687 [2024-11-06 13:26:23.470214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.687 [2024-11-06 13:26:23.470244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.687 qpair failed and we were unable to recover it. 
00:29:41.687 [2024-11-06 13:26:23.470580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.687 [2024-11-06 13:26:23.470612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.687 qpair failed and we were unable to recover it.
00:29:41.687 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 13:26:23.470 through 13:26:23.550; only the timestamps differ ...]
00:29:41.693 [2024-11-06 13:26:23.550864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.693 [2024-11-06 13:26:23.550893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.693 qpair failed and we were unable to recover it.
00:29:41.693 [2024-11-06 13:26:23.551253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.551282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.551728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.551770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.552132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.552161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.552523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.552552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.552918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.552949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.553319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.553348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.553630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.553658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.554046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.554076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.554436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.554465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.693 [2024-11-06 13:26:23.554831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.554861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 
00:29:41.693 [2024-11-06 13:26:23.555212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.693 [2024-11-06 13:26:23.555241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.693 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.555611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.555642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.555986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.556016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.556378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.556406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.556656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.556688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.557065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.557096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.557466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.557494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.557853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.557883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.558239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.558268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.558633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.558663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 
00:29:41.966 [2024-11-06 13:26:23.559035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.966 [2024-11-06 13:26:23.559065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.966 qpair failed and we were unable to recover it. 00:29:41.966 [2024-11-06 13:26:23.559434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.559463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.559822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.559852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.560219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.560248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.560608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.560638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.560882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.560915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.561276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.561305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.561667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.561697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.562060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.562091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.562372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.562402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 
00:29:41.967 [2024-11-06 13:26:23.562630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.562662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.563035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.563067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.563426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.563457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.563827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.563873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.564263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.564292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.564721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.564786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.565045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.565074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.565456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.565485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.565828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.565857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.566224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.566253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 
00:29:41.967 [2024-11-06 13:26:23.566506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.566538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.566901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.566932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.567288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.567317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.567677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.567706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.568087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.568117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.568460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.568490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.568872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.568902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.569267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.569296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.569662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.569692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.570085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.570116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 
00:29:41.967 [2024-11-06 13:26:23.570360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.570391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.570767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.570797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.571160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.571188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.571559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.571588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.571968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.571998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.572344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.572374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.572728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.572767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.573090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.573119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.573488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.573516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.967 qpair failed and we were unable to recover it. 00:29:41.967 [2024-11-06 13:26:23.573764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.967 [2024-11-06 13:26:23.573794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 
00:29:41.968 [2024-11-06 13:26:23.574167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.574196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.574575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.574604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.574974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.575004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.575348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.575377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.575756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.575787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.576048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.576076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.576470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.576499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.576726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.576780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.577146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.577175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.577536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.577566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 
00:29:41.968 [2024-11-06 13:26:23.577937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.577968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.578217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.578248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.578603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.578633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.578965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.578995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.579351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.579379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.579736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.579781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.580125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.580153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.580520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.580549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.580920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.580951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.581208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.581236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 
00:29:41.968 [2024-11-06 13:26:23.581483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.581511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.581836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.581867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.582212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.582242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.582604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.582634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.582974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.583004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.583366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.583394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.583765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.583794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.584174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.584202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.584558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.584587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.584955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.584987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 
00:29:41.968 [2024-11-06 13:26:23.585227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.585258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.585554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.585583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.585967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.585998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.586360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.586390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.586743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.586782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.586982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.587014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.968 qpair failed and we were unable to recover it. 00:29:41.968 [2024-11-06 13:26:23.587397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.968 [2024-11-06 13:26:23.587426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.587786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.587816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.588216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.588244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.588500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.588528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 
00:29:41.969 [2024-11-06 13:26:23.588893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.588923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.589286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.589316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.589685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.589715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.590088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.590118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.590468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.590496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.590865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.590896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.591153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.591180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.591525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.591553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.591973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.592003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.592359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.592387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 
00:29:41.969 [2024-11-06 13:26:23.592724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.592775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.593100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.593495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.593524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.593888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.593918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.594282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.594310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.594603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.594638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.594879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.594911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.595283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.595311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.595658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.595686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.596052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.596081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 
00:29:41.969 [2024-11-06 13:26:23.596446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.596475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.596731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.596773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.597103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.597130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.597511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.597539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.597782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.597814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.598181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.598209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.598573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.598602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.598969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.598999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.599364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.599393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 00:29:41.969 [2024-11-06 13:26:23.599649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.969 [2024-11-06 13:26:23.599679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.969 qpair failed and we were unable to recover it. 
00:29:41.969 [2024-11-06 13:26:23.599960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.969 [2024-11-06 13:26:23.599990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.969 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats without interruption, with timestamps advancing from 13:26:23.600360 through 13:26:23.680288 ...]
00:29:41.976 [2024-11-06 13:26:23.680658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.976 [2024-11-06 13:26:23.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:41.976 qpair failed and we were unable to recover it.
00:29:41.976 [2024-11-06 13:26:23.681049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.681080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.681435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.681465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.681829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.681869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.682227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.682258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.682666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.682697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.683079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.976 [2024-11-06 13:26:23.683111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.976 qpair failed and we were unable to recover it. 00:29:41.976 [2024-11-06 13:26:23.683478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.683509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.683805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.683836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.684199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.684228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.684674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.684703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 
00:29:41.977 [2024-11-06 13:26:23.685078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.685109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.685444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.685472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.685838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.685871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.686247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.686277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.686653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.686682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.687077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.687108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.687353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.687387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.687773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.687806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.688172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.688202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.688552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.688582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 
00:29:41.977 [2024-11-06 13:26:23.688951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.688981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.689329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.689360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.689767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.689799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.690166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.690196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.690549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.690580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.690927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.690960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.691334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.691365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.691724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.691770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.692135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.692167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.692523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.692553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 
00:29:41.977 [2024-11-06 13:26:23.692926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.692958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.693330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.693361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.693717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.693758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.694136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.694167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.694532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.694562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.694914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.694946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.977 [2024-11-06 13:26:23.695315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.977 [2024-11-06 13:26:23.695346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.977 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.695694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.695724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.696097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.696128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.696488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.696519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 
00:29:41.978 [2024-11-06 13:26:23.696879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.696910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.697278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.697308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.697709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.697756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.698026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.698055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.698448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.698479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.698831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.698863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.699227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.699260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.699619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.699649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.699989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.700021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.700381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.700412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 
00:29:41.978 [2024-11-06 13:26:23.700786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.700820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.701210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.701240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.701602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.701633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.702008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.702041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.702397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.702427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.702676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.702706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.703108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.703140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.703521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.703551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.703895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.703925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.704260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.704291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 
00:29:41.978 [2024-11-06 13:26:23.704646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.704676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.705038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.705070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.705437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.705467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.705828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.705861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.706133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.706163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.706399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.706432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.706780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.706812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.707197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.707228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.707596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.707626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.707979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.708013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 
00:29:41.978 [2024-11-06 13:26:23.708259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.708291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.708668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.708698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.709044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.709076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.709403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.709434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.978 [2024-11-06 13:26:23.709793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.978 [2024-11-06 13:26:23.709824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.978 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.710201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.710230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.710582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.710612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.710979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.711013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.711383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.711413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.711771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.711803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 
00:29:41.979 [2024-11-06 13:26:23.712161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.712190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.712435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.712466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.712823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.712863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.713249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.713279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.713567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.713596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.714008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.714040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.714397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.714428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.714851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.714882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.715236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.715266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.715639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.715670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 
00:29:41.979 [2024-11-06 13:26:23.716013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.716044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.716391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.716421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.716785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.716817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.717177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.717206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.717457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.717491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.717869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.717901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.718165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.718195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.718551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.718581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.718844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.718874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.719250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.719281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 
00:29:41.979 [2024-11-06 13:26:23.719650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.719681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.720104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.720135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.720505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.720535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.720805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.720834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.721277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.721306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.721691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.722009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.722039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.722397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.722427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.722787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.722820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.723206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.723236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 
00:29:41.979 [2024-11-06 13:26:23.723604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.723634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.723999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.724030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.979 [2024-11-06 13:26:23.724432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.979 [2024-11-06 13:26:23.724463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.979 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.724819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.724848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.725210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.725239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.725610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.725638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.726015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.726047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.726271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.726652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.726682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.727057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.727088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 
00:29:41.980 [2024-11-06 13:26:23.727449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.727478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.727861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.727892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.728261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.728310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.728685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.728713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.729019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.729051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.729393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.729422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.729786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.729818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.730194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.730223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.730454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.730484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.730773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.730804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 
00:29:41.980 [2024-11-06 13:26:23.731141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.731171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.731532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.731561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.731938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.731968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.732335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.732364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.732698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.732729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.733100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.733131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.733316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.733345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.733718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.733761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.734206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 00:29:41.980 [2024-11-06 13:26:23.734660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.980 [2024-11-06 13:26:23.734690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.980 qpair failed and we were unable to recover it. 
00:29:41.986 [2024-11-06 13:26:23.810477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.810505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.810866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.810896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.811269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.811298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.811668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.811697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.812012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.812042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.812396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.812425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.812792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.812823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.813064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.813096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.813456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.813485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.813793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.813823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 
00:29:41.986 [2024-11-06 13:26:23.814180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.814210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.814567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.814596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.814980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.815010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.815364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.815394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.815768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.815798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.816161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.816190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.816580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.816608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.816975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.817007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.817370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.817399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.817763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 
00:29:41.986 [2024-11-06 13:26:23.818170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.818198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.818579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.818608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.818944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.818981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.819355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.986 [2024-11-06 13:26:23.819384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.986 qpair failed and we were unable to recover it. 00:29:41.986 [2024-11-06 13:26:23.819765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.819796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.820046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.820078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.820330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.820361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.820733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.820775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.821110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.821138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.821381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.821413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 
00:29:41.987 [2024-11-06 13:26:23.821791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.821823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.822179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.822208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.822466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.822495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.822851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.822881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.823240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.823632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.823660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.824026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.824056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.824411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.824440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.824838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.824869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.825122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.825153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 
00:29:41.987 [2024-11-06 13:26:23.825520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.825549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.825889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.825920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.826289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.826318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.826635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.826671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.826907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.826941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.827336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.827365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.827728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.827767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.828017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.828046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.828427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.828457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.828715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.828767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 
00:29:41.987 [2024-11-06 13:26:23.829160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.829189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.829544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.829572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.829915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.829945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.830312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.830341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.830692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.830720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.831104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.831134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.831505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.831535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.831887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.831917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.832158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.832186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.832549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.832577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 
00:29:41.987 [2024-11-06 13:26:23.832950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.832980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.833354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.833383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.987 qpair failed and we were unable to recover it. 00:29:41.987 [2024-11-06 13:26:23.833644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.987 [2024-11-06 13:26:23.833680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.834067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.834098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.834466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.834494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.834852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.834882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.835280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.835309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.835677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.835707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.836147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.836178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.836537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.836566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 
00:29:41.988 [2024-11-06 13:26:23.836912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.836942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.837304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.837334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.837713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.837741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.838125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.838154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.838518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.838546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.838895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.838926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.839378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.839408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.839764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.839795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.840156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.840185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.840547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.840575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 
00:29:41.988 [2024-11-06 13:26:23.840932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.840962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.841301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.841330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.841692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.841721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.842083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.842112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.842481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.842510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.842778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.842809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.843186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.843215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.843577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.843606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.843980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.844010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.844382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.844411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 
00:29:41.988 [2024-11-06 13:26:23.844578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.844608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.844838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.844870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.845233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.845262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.845633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.845662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.846040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.846070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.846431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.846460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.846818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.988 [2024-11-06 13:26:23.846848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.988 qpair failed and we were unable to recover it. 00:29:41.988 [2024-11-06 13:26:23.847210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.847239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.847607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.847635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.847993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.848023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 
00:29:41.989 [2024-11-06 13:26:23.848270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.848299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.848668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.848697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.849071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.849108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.849465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.849494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.849866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.849896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.850278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.850308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.850547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.850577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.850933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.850963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.851341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.851370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.851625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.851653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 
00:29:41.989 [2024-11-06 13:26:23.851943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.851973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.852317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.852348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.852758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.852789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.853149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.853177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.853551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.853580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:41.989 [2024-11-06 13:26:23.853941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.989 [2024-11-06 13:26:23.853970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:41.989 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.854359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.854391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.854624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.854657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.855016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.855046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.855393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.855422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 
00:29:42.269 [2024-11-06 13:26:23.855771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.855801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.856246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.856275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.856603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.856633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.857000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.857031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.857283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.857311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.857663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.857692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.858071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.858101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.858467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.858495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.858850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.858882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 00:29:42.269 [2024-11-06 13:26:23.859243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.859273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 
00:29:42.269 [2024-11-06 13:26:23.859643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.269 [2024-11-06 13:26:23.859671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.269 qpair failed and we were unable to recover it. 
[... the same three-message failure sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats continuously with no other variation, the in-log timestamps advancing from 13:26:23.859643 through 13:26:23.939073 ...]
00:29:42.272 [2024-11-06 13:26:23.939424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.939454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.939811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.939842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.940203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.940233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.940489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.940523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.940895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.940927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.941270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.941300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.941652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.272 [2024-11-06 13:26:23.941681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.272 qpair failed and we were unable to recover it. 00:29:42.272 [2024-11-06 13:26:23.942083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.942114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.942464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.942494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.942868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.942900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.943260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.943291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.943637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.943667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.943905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.943940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.944291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.944322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.944677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.944707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.945057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.945088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.945443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.945473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.945833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.945865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.946276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.946306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.946659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.946690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.947113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.947145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.947400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.947429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.947807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.947839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.948078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.948112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.948372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.948405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.948791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.948823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.949178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.949208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.949574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.949605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.949995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.950027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.950387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.950424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.950777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.950809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.951173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.951204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.951562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.951592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.951975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.952005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.952365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.952396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.952767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.952797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.953145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.953174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.953615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.953644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.953849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.953883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.954238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.954267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.954532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.954561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.954918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.954949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.955301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.955330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.955690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.955720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.956103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.956134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.956467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.956495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.956863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.956894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.957314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.957343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.957773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.957804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.958160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.958189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.958549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.958579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.958937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.958967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.959353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.959383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.959736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.959781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.960082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.960111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.960465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.960494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.960768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.960798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.961147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.961176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.961550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.961579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.961967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.961999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.962359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.962397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.962755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.962785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.963182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.963212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.963575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.963604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.963988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.964019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.964384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.964415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.964786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.964817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.965209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.965238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.965417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.965445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.965838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.965875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 
00:29:42.273 [2024-11-06 13:26:23.966253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.966283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.966538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.966567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.966849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.273 [2024-11-06 13:26:23.966879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.273 qpair failed and we were unable to recover it. 00:29:42.273 [2024-11-06 13:26:23.967248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.967277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.967587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.967616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.967977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.968007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.968370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.968401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.968779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.968809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.969177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.969206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.969566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.969595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.969970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.970001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.970336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.970365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.970617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.970648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.970999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.971032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.971357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.971385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.971779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.971810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.972148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.972177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.972539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.972568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.972940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.972972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.973341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.973370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.973796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.973827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.974226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.974257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.974605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.974635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.974897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.974927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.975235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.975263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.975612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.975641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.976007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.976038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.976391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.976422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.976782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.976812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.977199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.977229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.977456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.977487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.977936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.977968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.978231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.978260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.978519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.978547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.978954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.978985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.979329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.979357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.979711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.979741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.980211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.980245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.980490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.980522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.980910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.980948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.981192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.981222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.981577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.981605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.982026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.982057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.982410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.982439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.982804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.982836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.983189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.983219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.983625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.983655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.983919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.983949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.984303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.984332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.984654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.984682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.985063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.985095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.985334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.985366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.985722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.985765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.986160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.986192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.986564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.986593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.986948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.986979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.987381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.987410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.987785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.987815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.988187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.988216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 00:29:42.274 [2024-11-06 13:26:23.988588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.274 [2024-11-06 13:26:23.988616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.274 qpair failed and we were unable to recover it. 
00:29:42.274 [2024-11-06 13:26:23.988970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.274 [2024-11-06 13:26:23.989000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.274 qpair failed and we were unable to recover it.
[... the same three-line connect() failed / sock connection error / qpair failed sequence repeats continuously for tqpair=0x7fb930000b90 (addr=10.0.0.2, port=4420), timestamps 13:26:23.989 through 13:26:24.067 ...]
00:29:42.277 [2024-11-06 13:26:24.067492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.277 [2024-11-06 13:26:24.067522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.277 qpair failed and we were unable to recover it.
00:29:42.277 [2024-11-06 13:26:24.067876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.067907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.068249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.068279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.068649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.068678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.069046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.069078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.069445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.069474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.069778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.069809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.070160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.070189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.070527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.277 [2024-11-06 13:26:24.070556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.277 qpair failed and we were unable to recover it. 00:29:42.277 [2024-11-06 13:26:24.070917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.070948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.071388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.071417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.071637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.071668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.072035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.072066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.072411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.072442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.072781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.072811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.073172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.073202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.073563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.073592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.073931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.073961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.074374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.074403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.074765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.074795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.075142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.075567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.075596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.075962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.075992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.076342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.076372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.076624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.076653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.076981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.077011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.077366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.077397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.077779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.077810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.078167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.078196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.078590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.078619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.078973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.079004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.079367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.079395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.079800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.079831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.080194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.080222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.080477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.080510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.080889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.080920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.081286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.081315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.081680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.081716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.082071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.082101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.082459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.082488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.082864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.082894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.083256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.083287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.083623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.083652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.084040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.084070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.084431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.084460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.084820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.084850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.085192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.085220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.085593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.085622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.085971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.086003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.086237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.086266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.086633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.086663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.086923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.086953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.087211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.087242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.087559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.087588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.087959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.087989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.088351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.088380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.088742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.088780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.089181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.089209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.089568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.089597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.089946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.089976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.090333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.090361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.090723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.090762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.091118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.091147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.091381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.091411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.091790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.091821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.092192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.092221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.092575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.092603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.092980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.093010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.093371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.093400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.093767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.093797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.094166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.094195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.094646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.094675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.094994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.095024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.095432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.095461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.095868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.095898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.096252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.096282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.096635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.096664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.097040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.097075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.097425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.097454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.097811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.097842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.278 [2024-11-06 13:26:24.098003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.098034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 
00:29:42.278 [2024-11-06 13:26:24.098413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.278 [2024-11-06 13:26:24.098442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.278 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.098811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.098842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.099176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.099205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.099620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.099648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.099990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.100020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.100351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.100381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.100738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.100777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.101110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.101139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.101511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.101540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.101907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.101945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 
00:29:42.279 [2024-11-06 13:26:24.102274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.102304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.102539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.102567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.102917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.102947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.103316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.103344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.103600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.103628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.103964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.103995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.104346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.104377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.104784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.104814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.105177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.105206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.105563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.105592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 
00:29:42.279 [2024-11-06 13:26:24.105950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.105980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.106239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.106268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.106596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.106624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.106989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.107020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.107381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.107409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.107810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.107841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.108171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.108200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.108563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.108592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.108962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.108992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.109339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 
00:29:42.279 [2024-11-06 13:26:24.109730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.109768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.110023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.110052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.110445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.110474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.110896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.110925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.111289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.111318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.111681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.111711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.112078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.112114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.112473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.112502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.112764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.112795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.113072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.113100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 
00:29:42.279 [2024-11-06 13:26:24.113453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.113482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.113701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.113733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.113991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.114022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.114389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.114418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.114769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.114799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.115148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.115177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.115535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.115564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.115939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.115968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.116334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.116364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 00:29:42.279 [2024-11-06 13:26:24.116801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.279 [2024-11-06 13:26:24.116832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.279 qpair failed and we were unable to recover it. 
00:29:42.279 [2024-11-06 13:26:24.117246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.279 [2024-11-06 13:26:24.117276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.279 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats, with only timestamps varying, roughly 200 further times between 13:26:24.117 and 13:26:24.197; duplicates omitted ...]
00:29:42.557 [2024-11-06 13:26:24.196940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.557 [2024-11-06 13:26:24.196970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.557 qpair failed and we were unable to recover it.
00:29:42.558 [2024-11-06 13:26:24.197335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.197364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.197723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.197771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.198107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.198137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.198501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.198532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.198877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.198907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.199253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.199283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.199647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.199676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.200038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.200070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.200431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.200460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.200827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.200858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 
00:29:42.558 [2024-11-06 13:26:24.201290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.201320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.201668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.201696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.202067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.202098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.202462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.202494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.202878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.202910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.203274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.203305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.203672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.203707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.203959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.203993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.204359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.204388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.204737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.204778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 
00:29:42.558 [2024-11-06 13:26:24.205132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.205164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.205525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.205555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.205906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.205937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.206336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.206366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.206720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.206764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.207135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.207165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.207528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.207559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.207810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.207842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.208201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.208231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.208569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.208601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 
00:29:42.558 [2024-11-06 13:26:24.208952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.208985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.209341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.209372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.209643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.209674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.210033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.210062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.210415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.210445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.210706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.210736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.211116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.211146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.211521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.558 [2024-11-06 13:26:24.211551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.558 qpair failed and we were unable to recover it. 00:29:42.558 [2024-11-06 13:26:24.211794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.211828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.212180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.212210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 
00:29:42.559 [2024-11-06 13:26:24.212552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.212583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.212884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.212914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.213295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.213325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.213694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.213727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.214120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.214151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.214523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.214553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.214834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.214864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.215220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.215250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.215635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.215665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.216034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.216066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 
00:29:42.559 [2024-11-06 13:26:24.216421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.216451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.216892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.216924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.217312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.217343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.217712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.217743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.218153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.218183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.218551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.218582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.218837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.218892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.219263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.219295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.219651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.219683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.220037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.220069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 
00:29:42.559 [2024-11-06 13:26:24.220422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.220451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.220822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.220853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.221231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.221262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.221555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.221586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.221975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.222004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.222375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.222404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.222769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.222798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.223159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.223187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.223542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.223571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.223923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.223954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 
00:29:42.559 [2024-11-06 13:26:24.224318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.559 [2024-11-06 13:26:24.224349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.559 qpair failed and we were unable to recover it. 00:29:42.559 [2024-11-06 13:26:24.224711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.224740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.225080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.225109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.225511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.225541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.225958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.225988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.226345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.226374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.226626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.226655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.227025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.227056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.227415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.227445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.227813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.227845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 
00:29:42.560 [2024-11-06 13:26:24.228185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.228215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.228552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.228582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.228879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.228910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.229277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.229308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.229669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.229700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.229935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.229966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.230234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.230269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.230618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.230649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.231004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.231036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.231302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.231334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 
00:29:42.560 [2024-11-06 13:26:24.231688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.231720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.232081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.232113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.232368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.232398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.232765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.232797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.233147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.233178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.233554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.233585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.233955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.233994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.234375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.234407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.234774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.234807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.235037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.235072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 
00:29:42.560 [2024-11-06 13:26:24.235426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.235456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.235805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.235836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.236186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.236216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.236568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.236599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.236960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.236990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.237352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.237382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.237738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.237799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.238174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.238203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.238566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.560 [2024-11-06 13:26:24.238595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.560 qpair failed and we were unable to recover it. 00:29:42.560 [2024-11-06 13:26:24.239072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.239102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 
00:29:42.561 [2024-11-06 13:26:24.239455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.239485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.239736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.239779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.240063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.240092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.240441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.240470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.240830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.240861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.241227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.241255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.241620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.241649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.241908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.241942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.242333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.242363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.242717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.242757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 
00:29:42.561 [2024-11-06 13:26:24.242870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.242902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.243294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.243323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.243681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.243711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.243968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.244004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.244390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.244420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.244787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.244818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.245177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.245206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.245471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.245500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.245850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.245881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 00:29:42.561 [2024-11-06 13:26:24.246248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.561 [2024-11-06 13:26:24.246277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.561 qpair failed and we were unable to recover it. 
00:29:42.561 [2024-11-06 13:26:24.246642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.561 [2024-11-06 13:26:24.246672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.561 qpair failed and we were unable to recover it.
00:29:42.561 [... the same three-line error pattern repeats back-to-back, with only the microsecond timestamps advancing, from 13:26:24.246642 through 13:26:24.326559: each reconnect attempt has connect() to 10.0.0.2 port 4420 fail with errno = 111 in posix.c:1054:posix_sock_create, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock then reports a sock connection error for tqpair=0x7fb930000b90, and the qpair fails without recovering ...]
00:29:42.567 [2024-11-06 13:26:24.326937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.326968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.327199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.327231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.327589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.327618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.328044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.328075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.328488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.328518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.328889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.328919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.329285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.329314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.329566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.329595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.329864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.329897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.330286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.330315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 
00:29:42.567 [2024-11-06 13:26:24.330688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.330718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.331102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.331133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.331492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.331523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.331879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.331909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.332297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.332326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.332714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.332742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.333126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.333155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.333499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.333527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.333872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.333904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.334133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.334165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 
00:29:42.567 [2024-11-06 13:26:24.334517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.334546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.334896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.334926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.335232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.335262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.335618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.335660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.336008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.336038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.336421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.336451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.336625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.336658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.336975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.337005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.337237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.337266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.337598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.337627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 
00:29:42.567 [2024-11-06 13:26:24.337891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.337922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.338282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.338312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.567 [2024-11-06 13:26:24.338688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.567 [2024-11-06 13:26:24.338717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.567 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.338958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.338992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.339288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.339317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.339667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.339696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.340062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.340093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.340430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.340461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.340829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.340860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.341229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.341259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 
00:29:42.568 [2024-11-06 13:26:24.341660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.341690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.342109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.342140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.342496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.342524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.342871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.342903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.343220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.343249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.343606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.343635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.344015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.344047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.344417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.344446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.344779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.344809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.345259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.345288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 
00:29:42.568 [2024-11-06 13:26:24.345626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.345656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.346024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.346053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.346484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.346512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.346726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.346770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.347186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.347526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.347555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.347928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.347959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.348301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.348330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.348699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.348728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.349003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.349032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 
00:29:42.568 [2024-11-06 13:26:24.349383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.349413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.349764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.349795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.350063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.350095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.350465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.350501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.350862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.350894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.351183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.351213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.351599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.351627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.568 qpair failed and we were unable to recover it. 00:29:42.568 [2024-11-06 13:26:24.351864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.568 [2024-11-06 13:26:24.351898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.352280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.352311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.352685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.352714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 
00:29:42.569 [2024-11-06 13:26:24.353092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.353122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.353463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.353492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.353832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.353863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.354232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.354263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.354655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.354685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.355019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.355049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.355416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.355445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.355812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.355844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.356221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.356251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.356616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.356647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 
00:29:42.569 [2024-11-06 13:26:24.356925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.356955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.357319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.357357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.357604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.357636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.357979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.358009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.358252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.358284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.358653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.358683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.359034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.359065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.359353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.359382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.359605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.359637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.360003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.360035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 
00:29:42.569 [2024-11-06 13:26:24.360396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.360426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.360787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.360818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.361207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.361235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.361594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.361622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.361974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.362004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.362386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.362414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.362778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.362810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.363170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.363198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.363570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.363600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.363842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.363876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 
00:29:42.569 [2024-11-06 13:26:24.364234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.364265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.364638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.364667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.364997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.365028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.365399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.365435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.365800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.365831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.569 [2024-11-06 13:26:24.366210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.569 [2024-11-06 13:26:24.366239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.569 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.366601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.366630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.367000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.367031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.367390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.367419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.367795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.367825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 
00:29:42.570 [2024-11-06 13:26:24.368255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.368283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.368612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.368641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.368983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.369013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.369367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.369396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.369804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.369836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.370201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.370239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.370573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.370602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.370937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.370968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.371316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.371345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.371594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.371626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 
00:29:42.570 [2024-11-06 13:26:24.372003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.372035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.372390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.372419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.372793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.372823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.373063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.373094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.373440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.373468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.373817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.373848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.374232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.374260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.374717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.374974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.375003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 00:29:42.570 [2024-11-06 13:26:24.375357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.570 [2024-11-06 13:26:24.375386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.570 qpair failed and we were unable to recover it. 
00:29:42.570 [2024-11-06 13:26:24.375739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.570 [2024-11-06 13:26:24.375783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.570 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 13:26:24.375 and 13:26:24.455, with only the timestamps changing ...]
00:29:42.849 [2024-11-06 13:26:24.455155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.849 [2024-11-06 13:26:24.455187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.849 qpair failed and we were unable to recover it.
00:29:42.849 [2024-11-06 13:26:24.455533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.455562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.455910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.455941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.456315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.456344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.456708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.456737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.457143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.457174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.457405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.457436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.457800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.457832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.458112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.458143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.458489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.458521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.458875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.458906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 
00:29:42.849 [2024-11-06 13:26:24.459283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.459312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.459567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.459598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.459925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.459956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.460305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.460336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.460569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.460602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.460948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.460978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.461266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.461297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.461655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.461684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.462130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.462162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.462402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.462431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 
00:29:42.849 [2024-11-06 13:26:24.462802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.462840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.849 [2024-11-06 13:26:24.463200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.849 [2024-11-06 13:26:24.463229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.849 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.463611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.463640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.464013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.464045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.464413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.464442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.464810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.464840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.465084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.465114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.465469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.465498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.465869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.465899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.466311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.466340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 
00:29:42.850 [2024-11-06 13:26:24.466779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.466818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.467193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.467221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.467584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.467615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.467997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.468029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.468387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.468416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.468662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.468695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.469115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.469147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.469404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.469433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.469801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.469830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.470246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.470278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 
00:29:42.850 [2024-11-06 13:26:24.470455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.470484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.470848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.470881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.471228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.471256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.471627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.471657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.472038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.472070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.472428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.472459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.472719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.472762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.473154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.473184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.473605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.473634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.474024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.474056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 
00:29:42.850 [2024-11-06 13:26:24.474405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.474435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.474785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.474816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.475074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.475103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.475496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.475525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.475870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.475900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.476260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.476289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.476660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.476690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.477096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.477127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.477426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.477455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.850 qpair failed and we were unable to recover it. 00:29:42.850 [2024-11-06 13:26:24.477821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.850 [2024-11-06 13:26:24.477851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 
00:29:42.851 [2024-11-06 13:26:24.478187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.478225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.478573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.478602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.478882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.478912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.479286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.479318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.479676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.479707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.480153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.480184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.480557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.480587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.480944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.480974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.481330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.481361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.481730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.481773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 
00:29:42.851 [2024-11-06 13:26:24.482153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.482190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.482564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.482594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.482872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.482904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.483298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.483327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.483669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.483701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.484069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.484100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.484463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.484494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.484825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.484855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.485222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.485252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.485524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.485555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 
00:29:42.851 [2024-11-06 13:26:24.485930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.485961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.486366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.486396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.486649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.486679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.486967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.486997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.487370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.487400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.487632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.487663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.488053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.488084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.488329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.488357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.488704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.488733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.489102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.489133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 
00:29:42.851 [2024-11-06 13:26:24.489474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.489503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.489875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.489906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.490275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.490305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.490680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.490710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.491148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.491179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.491535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.491563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.491928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.491958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.851 [2024-11-06 13:26:24.492201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.851 [2024-11-06 13:26:24.492232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.851 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.492598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.492627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.492983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.493014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 
00:29:42.852 [2024-11-06 13:26:24.493376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.493404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.493786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.493815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.494162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.494191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.494548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.494577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.494958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.494989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.495353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.495382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.495762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.495792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.496101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.496130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.496478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.496506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.496879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.496910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 
00:29:42.852 [2024-11-06 13:26:24.497286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.497323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.497698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.497727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.498108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.498139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.498495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.498524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.498901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.498931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.499301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.499329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.499695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.499724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.499963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.499997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.500378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.500408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.500796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.500826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 
00:29:42.852 [2024-11-06 13:26:24.501202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.501240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.501611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.501641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.501942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.501972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.502353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.502381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.502770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.502802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.503164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.503192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.503595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.503625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.503999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.504031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.504270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.504301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 00:29:42.852 [2024-11-06 13:26:24.504699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.852 [2024-11-06 13:26:24.504728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.852 qpair failed and we were unable to recover it. 
00:29:42.852 [2024-11-06 13:26:24.505102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.853 [2024-11-06 13:26:24.505131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.853 qpair failed and we were unable to recover it.
00:29:42.853 [... the same three-line failure — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats verbatim, with only the timestamps advancing, for roughly 200 further consecutive connection attempts from 13:26:24.505 through 13:26:24.584 ...]
00:29:42.858 [2024-11-06 13:26:24.584583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.584614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.584981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.585011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.585371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.585400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.585766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.585797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.586165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.586194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.586551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.586579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.586948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.586979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.587229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.587260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.587607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.587636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.588050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.588080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 
00:29:42.858 [2024-11-06 13:26:24.588432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.588468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.588819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.588850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.589220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.589249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.589599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.858 [2024-11-06 13:26:24.589628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.858 qpair failed and we were unable to recover it. 00:29:42.858 [2024-11-06 13:26:24.589875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.589905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.590272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.590301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.590655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.590684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.591043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.591073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.591431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.591462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.591816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.591847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 
00:29:42.859 [2024-11-06 13:26:24.592199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.592230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.592608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.592636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.592984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.593013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.593371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.593402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.593767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.593798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.594168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.594197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.594565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.594595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.594955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.594984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.595349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.595378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.595779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.595809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 
00:29:42.859 [2024-11-06 13:26:24.596168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.596198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.596605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.596635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.597011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.597042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.597397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.597426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.597790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.597821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.598254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.598284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.598633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.598661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.599029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.599060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.599360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.599388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.599765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.599795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 
00:29:42.859 [2024-11-06 13:26:24.600182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.600211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.600649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.600679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.600926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.600958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.601337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.601366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.601717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.601769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.602161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.602191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.602552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.602582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.602948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.602978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.603337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.603366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.603718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.603769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 
00:29:42.859 [2024-11-06 13:26:24.604176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.604216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.604562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.604590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.859 [2024-11-06 13:26:24.604960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.859 [2024-11-06 13:26:24.604991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.859 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.605369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.605400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.605659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.605689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.606046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.606076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.606409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.606438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.606811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.606841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.607209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.607237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.607623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.607652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 
00:29:42.860 [2024-11-06 13:26:24.608023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.608052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.608303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.608330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.608704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.608732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.609161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.609192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.609561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.609590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.609933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.609965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.610332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.610361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.610710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.610738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.611108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.611137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.611503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.611533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 
00:29:42.860 [2024-11-06 13:26:24.611774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.611807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.612214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.612243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.612597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.612625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.612981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.613011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.613372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.613401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.613768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.613799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.614162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.614191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.614494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.614523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.614892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.614922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.615292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.615321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 
00:29:42.860 [2024-11-06 13:26:24.615678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.615707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.616071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.616102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.616449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.616479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.616822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.616853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.617260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.617288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.617634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.617664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.618014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.618046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.618412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.618442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.618812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.618841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.860 [2024-11-06 13:26:24.619212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.619241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 
00:29:42.860 [2024-11-06 13:26:24.619647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.860 [2024-11-06 13:26:24.619682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.860 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.620043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.620073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.620424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.620452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.620819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.620851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.621221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.621249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.621616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.621645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.622018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.622049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.622297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.622325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.622686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.622716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.623086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.623117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 
00:29:42.861 [2024-11-06 13:26:24.623494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.623523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.623887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.623917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.624280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.624309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.624671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.624700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.625068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.625101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.625352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.625384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.625788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.625820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.626157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.626185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.626557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.626587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.626971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.627003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 
00:29:42.861 [2024-11-06 13:26:24.627378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.627407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.627768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.627798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.628159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.628555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.628584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.628831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.628863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.629238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.629269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.629627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.629656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.630021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.630052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.630416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.630445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.630811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.630840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 
00:29:42.861 [2024-11-06 13:26:24.631215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.631244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.631612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.631642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.632052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.632083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.632432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.632461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.632725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.632766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.633119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.633147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.633515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.861 [2024-11-06 13:26:24.633543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.861 qpair failed and we were unable to recover it. 00:29:42.861 [2024-11-06 13:26:24.633789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.862 [2024-11-06 13:26:24.633822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.862 qpair failed and we were unable to recover it. 00:29:42.862 [2024-11-06 13:26:24.634215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.862 [2024-11-06 13:26:24.634246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.862 qpair failed and we were unable to recover it. 00:29:42.862 [2024-11-06 13:26:24.634607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.862 [2024-11-06 13:26:24.634637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.862 qpair failed and we were unable to recover it. 
00:29:42.862 [2024-11-06 13:26:24.635015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.862 [2024-11-06 13:26:24.635051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:42.862 qpair failed and we were unable to recover it.
[... 13:26:24.635 - 13:26:24.677: the connect() failed (errno = 111) / sock connection error / qpair failed triplet above repeats for every reconnect attempt; identical records elided ...]
00:29:42.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1917279 Killed "${NVMF_APP[@]}" "$@"
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 13:26:24.677 - 13:26:24.687: reconnect-failure triplets continue, interleaved with the shell trace above; identical records elided ...]
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1918154
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1918154
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1918154 ']'
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:42.865 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:42.866 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:42.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:42.866 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:42.866 13:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 13:26:24.687 - 13:26:24.711: the connect() failed (errno = 111) / sock connection error / qpair failed triplet keeps repeating while the new target starts up; identical records elided ...]
00:29:42.867 [2024-11-06 13:26:24.711455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.711486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.711831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.711871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.712325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.712354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.712602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.712632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.712968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.713000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.713369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.713399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.713786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.713818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.714183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.714258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.714623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.714656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.867 [2024-11-06 13:26:24.715066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.715099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 
00:29:42.867 [2024-11-06 13:26:24.715547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.867 [2024-11-06 13:26:24.715577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.867 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.715943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.715974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.716438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.716467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.716726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.716774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.717240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.717272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.717557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.717586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.717949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.717979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.718325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.718355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.718691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.718721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.719083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.719113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 
00:29:42.868 [2024-11-06 13:26:24.719496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.719526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.719670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.719697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.720005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.720037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.720376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.720406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.720807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.720838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.721128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.721160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.721531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.721567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.721708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.721741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.722136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.722166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.722370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.722398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 
00:29:42.868 [2024-11-06 13:26:24.722788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.722820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.723192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.723228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.723581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.723611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.723840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.723874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.724147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.724177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.724538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.724567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.725021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.725052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.725416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.725445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.725829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.725860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.726243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.726272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 
00:29:42.868 [2024-11-06 13:26:24.726540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.726569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.727057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.727088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.727443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.727471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.727718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.727760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.728126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.728157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.728409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.728439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.728668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.728700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.729006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.729038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.729446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.868 [2024-11-06 13:26:24.729476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.868 qpair failed and we were unable to recover it. 00:29:42.868 [2024-11-06 13:26:24.729834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.729865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 
00:29:42.869 [2024-11-06 13:26:24.730252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.730282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.730667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.730697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.731071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.731103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.731364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.731393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.731802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.731834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.732212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.732244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.732595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.732626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.732884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.732917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.733177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.733207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.733455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.733485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 
00:29:42.869 [2024-11-06 13:26:24.733874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.733905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.734255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.734283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.734685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.734713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.735123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.735153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.735525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.735555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:42.869 [2024-11-06 13:26:24.735812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.869 [2024-11-06 13:26:24.735844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:42.869 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.736241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.736272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.736633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.736664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.736949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.736980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.737359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.737388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 
00:29:43.143 [2024-11-06 13:26:24.737783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.737816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.739720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.739810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.740125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.740161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.740533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.740563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.740972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.741004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.741433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.741463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.741832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.741864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.742229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.742260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.742610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.742641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.742906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.742936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 
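On Linux, errno = 111 is ECONNREFUSED: each TCP SYN to 10.0.0.2:4420 (the default NVMe/TCP port) is answered with a reset because no listener is accepting on that address yet, so every qpair connect attempt fails immediately. A minimal sketch of the failing step, using only the plain POSIX socket API rather than SPDK's posix_sock_create(), shows how the same errno surfaces; the file name connect_probe.c is illustrative, not from the source tree:

/* connect_probe.c - minimal sketch, assuming a Linux host with no listener
 * on 10.0.0.2:4420; mirrors the connect() step that posix.c reports on,
 * not SPDK's actual code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With nothing listening on the target address this prints
         * errno = 111 (Connection refused), the value seen in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}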
00:29:43.143 [... eight more identical connect() failed, errno = 111 / sock connection error / qpair failed sequences, 13:26:24.743208 through 13:26:24.746085 ...]
00:29:43.143 [2024-11-06 13:26:24.746191] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:29:43.143 [2024-11-06 13:26:24.746250] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:43.143 [2024-11-06 13:26:24.746499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.143 [2024-11-06 13:26:24.746528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:43.143 qpair failed and we were unable to recover it.
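The two initialization lines above mark the nvmf target process coming up: SPDK hands the bracketed argument list to DPDK's Environment Abstraction Layer, where -c 0xF0 pins the app to cores 4-7, --file-prefix=spdk0 namespaces the hugepage files so multiple DPDK processes can coexist, --base-virtaddr fixes the base mapping address, and --proc-type=auto lets EAL detect primary versus secondary. A hedged sketch of how such a list reaches DPDK directly follows; SPDK builds this argv internally during app startup, so the standalone program below is an illustration, not SPDK's code:

/* eal_init_sketch.c - illustration only: feeds an EAL argument list like the
 * one in the log straight to rte_eal_init(). Build against DPDK. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                          /* program name, as in the log */
        "-c", "0xF0",                    /* coremask: cores 4-7 */
        "--no-telemetry",                /* skip the telemetry socket */
        "--log-level=lib.eal:6",         /* per-component log levels */
        "--base-virtaddr=0x200000000000",
        "--match-allocations",           /* free hugepages as allocated */
        "--file-prefix=spdk0",           /* hugepage file namespace */
        "--proc-type=auto",              /* primary/secondary autodetect */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() returns the number of parsed args, or < 0 on error. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}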
00:29:43.143 [2024-11-06 13:26:24.746874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.143 [2024-11-06 13:26:24.746905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.143 qpair failed and we were unable to recover it. 00:29:43.143 [2024-11-06 13:26:24.747332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.747361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.747738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.747803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.748184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.748215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.748464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.748494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.748879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.748910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.749320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.749350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.749724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.749767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.750199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.750229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.750597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.750626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-11-06 13:26:24.750879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.750914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.751277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.751307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.751683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.751712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.752097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.752129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.752397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.752427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.752670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.752700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.752971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.753004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.753369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.753399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.753643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.753673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.753840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.753871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-11-06 13:26:24.754252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.754283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.754670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.754701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.754994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.755025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.755267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.755297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.755653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.755683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.756058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.756088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.756463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.756493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.756873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.756904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.757292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.757322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.757463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.757492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 
00:29:43.144 [2024-11-06 13:26:24.757859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.757891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.758254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.758284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.758667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.758697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.758925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.758955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.759221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.759260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.759619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.759649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.760017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.760048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.760302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.760331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.760686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.144 [2024-11-06 13:26:24.760715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.144 qpair failed and we were unable to recover it. 00:29:43.144 [2024-11-06 13:26:24.760963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.760997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.145 [2024-11-06 13:26:24.761357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.761387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.761727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.762136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.762165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.762537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.762565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.763066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.763098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.763465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.763501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.763867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.763899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.764274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.764303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.764692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.764722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 00:29:43.145 [2024-11-06 13:26:24.765148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.145 [2024-11-06 13:26:24.765178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.145 qpair failed and we were unable to recover it. 
00:29:43.150 [2024-11-06 13:26:24.837161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.837190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.837550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.837578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.837934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.837964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.838349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.838379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.838680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.838708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.838975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.839005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.839371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.839400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.839660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.839694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.840152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.840184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.840433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.840461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 
00:29:43.150 [2024-11-06 13:26:24.840725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.840769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.841013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.841042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.841380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.841409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.841789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.841820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.842116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.842144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.842496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.842525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.842866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.842897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.843273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.843303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.843667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.843696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.844069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.844099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 
00:29:43.150 [2024-11-06 13:26:24.844488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.150 [2024-11-06 13:26:24.844518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.150 qpair failed and we were unable to recover it. 00:29:43.150 [2024-11-06 13:26:24.844936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.844974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.845337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.845367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.845757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.845788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.846138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.846168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.846548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.846577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.846837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.846867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.847276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.847305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.847725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.847787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 00:29:43.151 [2024-11-06 13:26:24.848162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.151 [2024-11-06 13:26:24.848192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.151 qpair failed and we were unable to recover it. 
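For context: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 when the initiator dialed out, so every qpair connect attempt failed the same way. A minimal sketch of the same failure outside SPDK, assuming a host with no listener on the port (hypothetical local commands, not part of this run; traceback trimmed):

    # No listener on TCP 4420 (the NVMe/TCP port from this log), so connect() is refused:
    $ python3 -c 'import socket; socket.create_connection(("127.0.0.1", 4420))'
    ...
    ConnectionRefusedError: [Errno 111] Connection refused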
00:29:43.151 [2024-11-06 13:26:24.848445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect() failed / qpair error pattern continues from 13:26:24.848578 through 13:26:24.851545; duplicate records elided ...]
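The core count reported by spdk_app_start reflects the CPU mask the application was launched with; with SPDK's standard app options a four-core run would typically come from a mask like 0xF. Illustrative invocation only, the binary path and mask are assumptions, not taken from this run:

    # Hypothetical: start the target on cores 0-3, which would report "Total cores available: 4"
    $ build/bin/nvmf_tgt -m 0xF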
[... the same error pattern continues uninterrupted from 13:26:24.851788 through 13:26:24.900020; duplicate records elided ...]
00:29:43.155 [2024-11-06 13:26:24.902719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:43.155 [2024-11-06 13:26:24.902775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:43.155 [2024-11-06 13:26:24.902783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:43.155 [2024-11-06 13:26:24.902791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:43.155 [2024-11-06 13:26:24.902797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
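The app_setup_trace notices above are the target's own how-to for capturing these events. A minimal sketch, run on the same node while the nvmf app is still up (the command and shm path are quoted from the notices; the output paths are hypothetical):

    # Snapshot and decode the live tracepoints for shm instance 0
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or, as the notice suggests, keep the raw buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0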
00:29:43.155 [2024-11-06 13:26:24.905082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:43.155 [2024-11-06 13:26:24.905240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:43.155 [2024-11-06 13:26:24.905392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:43.155 [2024-11-06 13:26:24.905392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
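Each reactor notice above is one SPDK event loop being pinned to a CPU (cores 4-7 here). A quick way to double-check the pinning from a shell, assuming upstream SPDK's convention of naming its poller threads reactor_<core>:

    # psr shows the processor each reactor thread is currently running on
    ps -eLo pid,tid,psr,comm | grep reactor_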
00:29:43.158 [2024-11-06 13:26:24.940674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d5f30 is same with the state(6) to be set
00:29:43.158 Read completed with error (sct=0, sc=8)
00:29:43.158 starting I/O failed
00:29:43.158 [... 31 more Read/Write completions fail the same way: error (sct=0, sc=8), starting I/O failed ...]
00:29:43.158 [2024-11-06 13:26:24.941719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.158 [2024-11-06 13:26:24.941985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.158 [2024-11-06 13:26:24.942055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420
00:29:43.158 qpair failed and we were unable to recover it.
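Decoding the failure: errno 111 is ECONNREFUSED on Linux, meaning the host at 10.0.0.2 actively rejected the TCP connection to port 4420 (no listener). The aborted I/Os carry NVMe generic status sct=0, sc=8, which in the spec's generic status set reads as Command Aborted due to SQ Deletion, consistent with the driver tearing down the queue pair. A hand probe of the same condition, assuming the logged address and port:

    # Bash opens a TCP connection via /dev/tcp; with no listener on 4420
    # the redirect fails with "Connection refused" (errno 111)
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo 'refused, as in the log'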
00:29:43.158 [... the connect() failed, errno = 111 / sock connection error / qpair failed triple now repeats for tqpair=0x7fb934000b90, addr=10.0.0.2, port=4420, from 13:26:24.942 through 13:26:24.960 ...]
00:29:43.159 [2024-11-06 13:26:24.960990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.961020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.961265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.961294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.961658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.961688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.961959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.961988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.962356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.962385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.962613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.962642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.962931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.962963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.963342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.159 [2024-11-06 13:26:24.963370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.159 qpair failed and we were unable to recover it. 00:29:43.159 [2024-11-06 13:26:24.963618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.963649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.964006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.964037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 
00:29:43.160 [2024-11-06 13:26:24.964398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.964428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.964839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.964870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.965093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.965123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.965563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.965594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.965963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.965993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.966245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.966276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.966657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.966686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.966913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.966943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.967274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.967305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.967665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.967695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 
00:29:43.160 [2024-11-06 13:26:24.968019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.968049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.968424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.968453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.968825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.968857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.968961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.968988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.969279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.969314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.969531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.969562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.969951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.969981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.970366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.970397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.970638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.970669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.971026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.971057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 
00:29:43.160 [2024-11-06 13:26:24.971446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.971475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.971698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.971727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.971988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.972018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.972386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.972417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.972769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.972801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.973146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.973175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.973549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.973580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.973928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.973958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.974346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.974376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.974629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.974657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 
00:29:43.160 [2024-11-06 13:26:24.974887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.974918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.975304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.975334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.975711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.975741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.976100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.976131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.976254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.976283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.976619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.976649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.160 qpair failed and we were unable to recover it. 00:29:43.160 [2024-11-06 13:26:24.976878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.160 [2024-11-06 13:26:24.976909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.977132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.977162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.977404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.977433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.977804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.977836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 
00:29:43.161 [2024-11-06 13:26:24.978091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.978121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.978490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.978520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.978982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.979014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.979273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.979302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.979533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.979813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.979843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.980233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.980263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.980511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.980540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.980812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.980842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.981221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.981249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 
00:29:43.161 [2024-11-06 13:26:24.981484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.981513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.981731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.981770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.981873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.981900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.982266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.982295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.982639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.982673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.983032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.983063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.983416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.983447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.983670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.983699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.984091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.984121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.984399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.984427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 
00:29:43.161 [2024-11-06 13:26:24.984810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.984840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.985082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.985111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.985492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.985521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.985908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.985939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.986292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.986320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.986567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.986601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.986958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.986989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.987281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.987310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.987603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.987631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 00:29:43.161 [2024-11-06 13:26:24.987857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.161 [2024-11-06 13:26:24.987888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.161 qpair failed and we were unable to recover it. 
00:29:43.161 [2024-11-06 13:26:24.988262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.988290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.988593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.988622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.988862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.988896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.989057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.989087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.989512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.989541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.989912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.989942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.990313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.990341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.990563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.990591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.990935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.990966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.991357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.991386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 
00:29:43.162 [2024-11-06 13:26:24.991618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.991648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.991899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.991932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.992304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.992333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.992693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.992724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.993085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.993115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.993483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.993513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.993728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.993768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.994138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.994167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.994536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.994565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.994814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.994843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 
00:29:43.162 [2024-11-06 13:26:24.995219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.995248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.995591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.995622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.995837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.995867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.996230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.996258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.996350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.996394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.996729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.996766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.996885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.996912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.997310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.997339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.997695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.997725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.998169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.998199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 
00:29:43.162 [2024-11-06 13:26:24.998580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.998609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.998955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.998986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.999365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.999394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:24.999766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:24.999796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:25.000153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:25.000182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:25.000548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:25.000577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:25.000800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:25.000830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:25.001095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:25.001126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.162 qpair failed and we were unable to recover it. 00:29:43.162 [2024-11-06 13:26:25.001336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.162 [2024-11-06 13:26:25.001366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.001709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.001739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 
00:29:43.163 [2024-11-06 13:26:25.002108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.002138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.002505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.002535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.002888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.002919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.003015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.003041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.003254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.003281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.003517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.003546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.003908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.003938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.004108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.004137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.004382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.004412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 00:29:43.163 [2024-11-06 13:26:25.004761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.004791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it. 
00:29:43.163 [2024-11-06 13:26:25.005003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.163 [2024-11-06 13:26:25.005033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.163 qpair failed and we were unable to recover it.
00:29:43.440 [the three messages above repeat with fresh timestamps through 2024-11-06 13:26:25.046637; every reconnect attempt for tqpair=0x7fb934000b90 to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered]
00:29:43.440 [2024-11-06 13:26:25.047136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.440 [2024-11-06 13:26:25.047245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.440 qpair failed and we were unable to recover it.
00:29:43.443 [the same three messages then repeat for tqpair=0x7fb930000b90 through 2024-11-06 13:26:25.077282, again failing with errno = 111 on every attempt]
00:29:43.443 [2024-11-06 13:26:25.077639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.077670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.078027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.078057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.078265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.078294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.078656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.078685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.078799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.078838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.078934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.078963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.079198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.079227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.079467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.079497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.079956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.079989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.080383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.080413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 
00:29:43.443 [2024-11-06 13:26:25.080814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.080845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.081254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.081284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.081641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.081671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.081901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.081932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.082282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.082312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.082720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.082758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.082999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.083028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.083459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.083487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.083694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.083723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.084169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.084199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 
00:29:43.443 [2024-11-06 13:26:25.084415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.084447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.084717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.084755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.084869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.084896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.085160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.085188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.085574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.085603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.085852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.085882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.086109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.086141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.086386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.086415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.086826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.086856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.087198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.087227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 
00:29:43.443 [2024-11-06 13:26:25.087440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.087469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.087715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.087751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.087968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.087997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.088252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.443 [2024-11-06 13:26:25.088281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.443 qpair failed and we were unable to recover it. 00:29:43.443 [2024-11-06 13:26:25.088644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.088674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.089041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.089073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.089454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.089484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.089852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.089884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.090239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.090268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.090649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.090678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 
00:29:43.444 [2024-11-06 13:26:25.091069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.091099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.091343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.091372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.091609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.091638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.092023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.092053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.092433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.092469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.092695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.092724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.092995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.093026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.093255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.093284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.093531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.093560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 00:29:43.444 [2024-11-06 13:26:25.093659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.444 [2024-11-06 13:26:25.093687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.444 qpair failed and we were unable to recover it. 
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Write completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 Read completed with error (sct=0, sc=8)
00:29:43.444 starting I/O failed
00:29:43.444 [2024-11-06 13:26:25.094507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:43.444 [2024-11-06 13:26:25.095075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.444 [2024-11-06 13:26:25.095192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c8010 with addr=10.0.0.2, port=4420
00:29:43.444 qpair failed and we were unable to recover it.
00:29:43.444 [2024-11-06 13:26:25.095531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.444 [2024-11-06 13:26:25.095567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c8010 with addr=10.0.0.2, port=4420
00:29:43.444 qpair failed and we were unable to recover it.
[... the same three-line record repeats for reconnect attempts timestamped 2024-11-06 13:26:25.095813 through 13:26:25.104597, all errno = 111 against tqpair=0x5c8010, addr=10.0.0.2, port=4420 ...]
00:29:43.445 [2024-11-06 13:26:25.105053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.445 [2024-11-06 13:26:25.105160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:43.445 qpair failed and we were unable to recover it.
[... the same three-line record repeats for reconnect attempts timestamped 2024-11-06 13:26:25.105618 through 13:26:25.133816, all errno = 111 against tqpair=0x7fb930000b90, addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:29:43.447 [2024-11-06 13:26:25.134198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.134228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.134453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.134482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.134877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.134907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.135153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.135181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.135544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.135573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.135951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.135981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.136348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.136376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.136743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.136785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.137189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.137219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 00:29:43.447 [2024-11-06 13:26:25.137657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.447 [2024-11-06 13:26:25.137685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.447 qpair failed and we were unable to recover it. 
00:29:43.447 [2024-11-06 13:26:25.138066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.138097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.138463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.138492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.138871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.138900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.139252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.139280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.139651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.139680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.140066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.140096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.140348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.140376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.140593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.140622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.140862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.140891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.141142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.141171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-11-06 13:26:25.141531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.141560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.141865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.141895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.142140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.142169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.142389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.142418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.142792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.142822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.143195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.143231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.143601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.143630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.143845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.143874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.144270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.144298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.144708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.144737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-11-06 13:26:25.145106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.145137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.145511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.145540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.145913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.145945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.146241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.146270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.146522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.146550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.146998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.147028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.147335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.147364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.147588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.147629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.147979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.148011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.148222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.148251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 
00:29:43.448 [2024-11-06 13:26:25.148639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.148884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.148913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.149291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.149320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.448 [2024-11-06 13:26:25.149570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.448 [2024-11-06 13:26:25.149602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.448 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.149973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.150004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.150138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.150166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.150533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.150563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.150909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.150939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.151181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.151212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.151618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.151648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-11-06 13:26:25.152029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.152059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.152284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.152314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.152539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.152568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.152862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.152893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.153282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.153647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.153675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.153921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.153952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.154398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.154427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.154802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.154831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.155248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.155278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-11-06 13:26:25.155635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.155665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.156051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.156081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.156305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.156334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.156530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.156560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.156799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.156829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.157207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.157236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.157606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.157635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.157856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.157887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.158095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.158124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.158511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.158539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-11-06 13:26:25.158918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.158949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.159333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.159362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.159614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.159642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.160016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.160046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.160421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.160451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.160817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.160847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.161221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.161252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.161613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.161651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.161883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.161912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.162304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.162333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 
00:29:43.449 [2024-11-06 13:26:25.162699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.162727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.449 [2024-11-06 13:26:25.162838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.449 [2024-11-06 13:26:25.162866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.449 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.163239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.163269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.163585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.163613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.163855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.163885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.164167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.164196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.164559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.164587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.165017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.165047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.165274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.165304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.165673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.165702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-11-06 13:26:25.166076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.166106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.166484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.166513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.166878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.166909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.167151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.167180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.167270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.167298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.167619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.167648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.167860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.167890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.168289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.168319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.168690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.168720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.169090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.169120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-11-06 13:26:25.169462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.169491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.169720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.169771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.170037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.170067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.170313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.170342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.170713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.170744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.170872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.170899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.171272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.171302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.171673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.171703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.172071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.172102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.172348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.172378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-11-06 13:26:25.172597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.172627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.173015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.173048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.173411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.173441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.173808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.173838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.174065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.174093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.174309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.174338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.174722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.174757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.175117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.175151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.175403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.175432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 00:29:43.450 [2024-11-06 13:26:25.175650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.450 [2024-11-06 13:26:25.175681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.450 qpair failed and we were unable to recover it. 
00:29:43.450 [2024-11-06 13:26:25.175893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.175923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.176285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.176314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.176690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.176719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.177136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.177166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.177530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.177559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.177812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.177843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.178192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.178221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.178581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.178610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.179009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.179040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 00:29:43.451 [2024-11-06 13:26:25.179275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.179304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it. 
00:29:43.451 [2024-11-06 13:26:25.179397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.179423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it.
00:29:43.451 [2024-11-06 13:26:25.179943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.451 [2024-11-06 13:26:25.180047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.451 qpair failed and we were unable to recover it.
[same three-message sequence — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeated continuously from 13:26:25.180503 through 13:26:25.245709; duplicate entries omitted]
00:29:43.456 [2024-11-06 13:26:25.245805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.245833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.246287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.246387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.246680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.246719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.247082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.247187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.247484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.247522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.247690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.247721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.248069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.248099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.248468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.248497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.248874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.248905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.249137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.249166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
00:29:43.456 [2024-11-06 13:26:25.249423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.249457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.249706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.249736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.249974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.250004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.250246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.250274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.250626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.250657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.250910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.250941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.251301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.251331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.251684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.251712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.252100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.252131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.252509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.252539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 
00:29:43.456 [2024-11-06 13:26:25.252792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.252822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.253234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.253263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.253621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.253651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.253878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.253908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.254328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.254368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.456 qpair failed and we were unable to recover it. 00:29:43.456 [2024-11-06 13:26:25.254708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.456 [2024-11-06 13:26:25.254740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.255121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.255149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.255499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.255529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.255902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.255932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.256179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.256207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 
00:29:43.457 [2024-11-06 13:26:25.256622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.256650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.256971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.257003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.257355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.257383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.257616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.257644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.257903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.257934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.258303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.258331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.258686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.258714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.259066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.259103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.259353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.259381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 00:29:43.457 [2024-11-06 13:26:25.259629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.457 [2024-11-06 13:26:25.259662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.457 qpair failed and we were unable to recover it. 
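[editor's note] errno = 111 on Linux is ECONNREFUSED ("Connection refused"): the target at 10.0.0.2:4420 is actively refusing TCP connections, i.e. nothing is listening on the NVMe/TCP port while the host keeps retrying. A minimal sketch that reproduces the same errno by connecting to a port with no listener (a hypothetical standalone program, not part of the test or of SPDK):

    /* errno_demo.c — connect() to a TCP port with no listener fails
     * with ECONNREFUSED (errno 111 on Linux), the same errno that
     * posix_sock_create reports above. Illustration only. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumes no local listener */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }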
00:29:43.457 Read completed with error (sct=0, sc=8)
00:29:43.457 starting I/O failed
00:29:43.457 Write completed with error (sct=0, sc=8)
00:29:43.457 starting I/O failed
00:29:43.457 (the two-line record above repeats for all 32 outstanding I/Os on the qpair — every read and write completes with error (sct=0, sc=8) and is failed back)
00:29:43.457 [2024-11-06 13:26:25.260859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:43.457 [2024-11-06 13:26:25.261178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.457 [2024-11-06 13:26:25.261238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420
00:29:43.457 qpair failed and we were unable to recover it.
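[editor's note] Two codes in this stretch are worth decoding. The "-6" in the CQ transport error is, by the common SPDK convention of returning negated errno values, ENXIO — "No such device or address" — which matches the parenthetical in the log line itself. The (sct=0, sc=8) on the failed I/Os reads, per the NVMe base specification's generic command status table, as status code type 0 with status code 0x08, "Command Aborted due to SQ Deletion" — consistent with outstanding I/Os being failed back when the qpair is torn down. A minimal decode sketch (a hypothetical standalone program, not SPDK code):

    /* decode_status.c — decodes the two codes seen above. Assumes the
     * negated-errno convention and the NVMe generic status table;
     * illustration only. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int cq_rc = -6;            /* "CQ transport error -6" from the log */
        int sct = 0, sc = 0x8;     /* "(sct=0, sc=8)" on each failed I/O  */

        /* On Linux, strerror(6) is "No such device or address" (ENXIO). */
        printf("CQ transport error %d (%s)\n", cq_rc, strerror(-cq_rc));

        /* NVMe base spec, Generic Command Status (SCT 0):
         * SC 0x08 is "Command Aborted due to SQ Deletion". */
        if (sct == 0 && sc == 0x8)
            printf("(sct=%d, sc=%d): Command Aborted due to SQ Deletion\n", sct, sc);
        return 0;
    }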
00:29:43.458 (the connect() failed / sock connection error / qpair failed record continues to repeat for every reconnect attempt on tqpair=0x7fb93c000b90, from 13:26:25.261498 through 13:26:25.309029, with only the timestamps changing)
00:29:43.461 [2024-11-06 13:26:25.309279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.309308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.309540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.309573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.309946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.309976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.310302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.310332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.310722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.310761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.311124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.311153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.311375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.311403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.311637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.311665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.312044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.312076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.312313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.312341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 
00:29:43.461 [2024-11-06 13:26:25.312711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.312740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.313107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.313137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.313507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.313537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.313795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.313825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.314180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.314209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.314443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.314472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.314855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.314885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.315256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.315285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.461 [2024-11-06 13:26:25.315663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.461 [2024-11-06 13:26:25.315692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.461 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.316060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.316090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 
00:29:43.462 [2024-11-06 13:26:25.316444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.316473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.316845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.316876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.317286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.317314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.317611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.317639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.317845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.317875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.318104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.318132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.318465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.318493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.318719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.318755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.319130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.319159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.319544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.319572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 
00:29:43.462 [2024-11-06 13:26:25.320013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.320043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.320492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.320522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.320893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.320923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.321142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.321170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.321525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.321555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.321922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.321953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.322326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.322361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.322721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.322758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.323119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.323147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.323405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.323432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 
00:29:43.462 [2024-11-06 13:26:25.323656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.323685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.324055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.324085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.324435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.324465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.324689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.324717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.324985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.325018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.325387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.325417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.325586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.325613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.325837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.325866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.462 [2024-11-06 13:26:25.326077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.462 [2024-11-06 13:26:25.326106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.462 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.326461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.326490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 13:26:25.326869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.326901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.327275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.327305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.327682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.327711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.327960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.327989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.328207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.328235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.328590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.328619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.328995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.329024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.329416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.329446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.329807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.329837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.330214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.330242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 
00:29:43.737 [2024-11-06 13:26:25.330596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.330624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.330843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.330872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.331190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.331219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.331449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.331478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.331835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.331891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.332128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.332156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.332536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.332564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.332939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.332970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.333339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.737 [2024-11-06 13:26:25.333367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.737 qpair failed and we were unable to recover it. 00:29:43.737 [2024-11-06 13:26:25.333727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.333767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 
00:29:43.738 [2024-11-06 13:26:25.334105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.334134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.334489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.334518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.334891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.334922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.335289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.335317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.335687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.335716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.336168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.336196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.336568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.336628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.336986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.337016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.337364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.337392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.337729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.337765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 
00:29:43.738 [2024-11-06 13:26:25.338154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.338183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.338556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.338585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.338941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.338971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.339311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.339339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.339546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.339574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.340003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.340033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.340255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.340286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.340649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.340678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.340886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.340915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.341165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.341194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 
00:29:43.738 [2024-11-06 13:26:25.341487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.341516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.341740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.341778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.342139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.342168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.342540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.342569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.342777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.342807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.343190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.343220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.343590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.343620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.343987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.344017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.344393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.344422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.344856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.344887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 
00:29:43.738 [2024-11-06 13:26:25.345275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.345303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.345683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.345711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.345931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.345961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.346239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.346268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.346486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.346515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.346777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.346806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.347062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.347093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.738 qpair failed and we were unable to recover it. 00:29:43.738 [2024-11-06 13:26:25.347489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.738 [2024-11-06 13:26:25.347517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.347891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.347921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.348148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.348177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 
00:29:43.739 [2024-11-06 13:26:25.348549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.348577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.348823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.348852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.349251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.349280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.349655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.349683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.350063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.350092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.350459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.350487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.350863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.350899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.351170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.351199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.351584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.351612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.351965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.351995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 
00:29:43.739 [2024-11-06 13:26:25.352207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.352235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.352480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.352511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.352872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.352902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.353121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.353149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.353541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.353569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.354018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.354048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.354422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.354452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.354824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.354853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.355227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.355256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.355625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.355653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 
00:29:43.739 [2024-11-06 13:26:25.356074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.356103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.356447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.356475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.356695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.356724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.357113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.357142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.357517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.357545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.357794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.357824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.358162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.358190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.358442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.358474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.358824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.358854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 00:29:43.739 [2024-11-06 13:26:25.359089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.739 [2024-11-06 13:26:25.359117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.739 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-11-06 13:26:25.428920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.428950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.429337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.429366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.429742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.429779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.430175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.430212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.430572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.430601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.430972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.431003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.431380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.431409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.431779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.431809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.432185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.432213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.432461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.432489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-11-06 13:26:25.432854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.432884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.433150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.433178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.433603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.433632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.433975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.434006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.434344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.434373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.434739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.434791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.434891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.434917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.435373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.435403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.435768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.435797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.436157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.436186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 
00:29:43.745 [2024-11-06 13:26:25.436631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.436659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.436870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.436900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.437296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.437324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.437735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.437774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.438185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.438214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.438597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.438625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.439009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.439039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.439438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.439468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.439836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.439867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.745 qpair failed and we were unable to recover it. 00:29:43.745 [2024-11-06 13:26:25.440102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.745 [2024-11-06 13:26:25.440129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-11-06 13:26:25.440433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.440463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.440780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.440810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.441076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.441104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.441501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.441529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.441763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.441793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.442049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.442077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.442428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.442456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.442697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.442729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.443113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.443142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.443532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.443560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-11-06 13:26:25.443927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.443958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.444306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.444337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.444715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.444753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.445146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.445181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.445429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.445457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.445813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.445842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.446098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.446129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.446377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.446407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.446818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.446849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.447056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.447086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-11-06 13:26:25.447542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.447572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.447966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.447997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.448128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.448160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.448314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.448341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.448670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.448699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.449008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.449037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.449413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.449443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.449670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.449702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.449962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.449995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.450274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.450301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 
00:29:43.746 [2024-11-06 13:26:25.450675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.450704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.451064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.451095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.746 [2024-11-06 13:26:25.451451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.746 [2024-11-06 13:26:25.451480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.746 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.451742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.451800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.452167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.452196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.452491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.452520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.452913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.452943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.453171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.453200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.453444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.453472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.453727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.453763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 
00:29:43.747 [2024-11-06 13:26:25.453895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.453923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.454175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.454415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.454447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.454653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.454682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.454921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.454952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.455080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.455111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.455487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.455515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.455883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.455914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.456142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.456170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.456614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.456643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 
00:29:43.747 [2024-11-06 13:26:25.457013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.457043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.457411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.457441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.457668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.457698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.458093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.458131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.458376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.458407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.458775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.458806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.459164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.459193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.459609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.459637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.459845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.459875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.460260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.460289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 
00:29:43.747 [2024-11-06 13:26:25.460541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.460570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.460813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.460844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.461220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.461248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.461698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.461728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.462074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.462103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.462524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.462555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.462901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.462931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.463307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.463336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.463570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.463601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.463976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.464007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 
00:29:43.747 [2024-11-06 13:26:25.464272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.747 [2024-11-06 13:26:25.464300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.747 qpair failed and we were unable to recover it. 00:29:43.747 [2024-11-06 13:26:25.464675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.464704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.464944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.464974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.465215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.465243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.465472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.465501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.465767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.465798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.466033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.466062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.466322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.466351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.466455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.466485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.466841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.466871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 
00:29:43.748 [2024-11-06 13:26:25.467224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.467262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.467482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.467511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.467800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.467829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.468127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.468156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.468534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.468562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.468663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.468690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.469051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.469082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.469296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.469324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.469568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.469597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.469856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.469886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 
00:29:43.748 [2024-11-06 13:26:25.470132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.470162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.470442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.470472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.470728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.470766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.471225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.471260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.471606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.471641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.471859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.471889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.472140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.472169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.472529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.472925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.472956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.473352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.473380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 
00:29:43.748 [2024-11-06 13:26:25.473761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.473791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.474010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.474038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.474404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.474432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.474809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.474840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.475106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.475135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.475498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.475526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.475903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.475933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.476381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.476409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.476775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.476805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.748 qpair failed and we were unable to recover it. 00:29:43.748 [2024-11-06 13:26:25.477064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.748 [2024-11-06 13:26:25.477094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 
00:29:43.749 [2024-11-06 13:26:25.477467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.477496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.477888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.477918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.478302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.478329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.478708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.478737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.479098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.479126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.479385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.479413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.479765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.479796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.480176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.480204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.480591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.480620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.480789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.480819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 
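For context on what this loop is reporting: errno 111 on Linux is ECONNREFUSED, meaning the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) actively refused the TCP connection, typically because no listener was up yet on that address. The sketch below is a minimal standalone reproduction of that failure mode, not SPDK's actual posix.c code path; the address and port are taken from the log.

/* Minimal sketch (not SPDK's posix_sock_create): a plain connect() to an
 * address with no listener fails with errno = 111 (ECONNREFUSED), the same
 * errno the log's posix.c error reports. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}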
00:29:43.749 [2024-11-06 13:26:25.481146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.481174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.481429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.481457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.481551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.481578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb93c000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.481991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.482092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.482393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.482429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.482717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.482771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.483085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.483114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.483334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.483363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.483589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.483618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.483872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.483903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 
00:29:43.749 [2024-11-06 13:26:25.484039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.484074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.484332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.484360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.484594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.484622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.484981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.485022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.485389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.485419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.485678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.485707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.486093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.486124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.486512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.486541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.486903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.486933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.487165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.487193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 
00:29:43.749 [2024-11-06 13:26:25.487368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.487396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.487776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.487808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.488064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.488093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.488517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.488546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.488763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.488794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.489072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.489101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.489452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.489482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.489723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.749 [2024-11-06 13:26:25.489762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.749 qpair failed and we were unable to recover it. 00:29:43.749 [2024-11-06 13:26:25.490145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.490174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.490423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 
00:29:43.750 [2024-11-06 13:26:25.490858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.490888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.491267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.491295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.491657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.491692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.492115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.492147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.492504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.492536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.492907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.492937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.493361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.493390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.493627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.493657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.493923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.493954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.494327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.494359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 
00:29:43.750 [2024-11-06 13:26:25.494481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.494512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.494782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.494812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.495210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.495240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.495508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.495537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.495917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.495947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.496193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.496221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.496446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.496477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.496724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.496763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.496864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.496892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.497224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.497252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 
00:29:43.750 [2024-11-06 13:26:25.497507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.497536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.497938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.497970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.498350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.498380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.498765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.498802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.499185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.499215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.499442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.499470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.499731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.499771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.500095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.500124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.500350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.500381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.500568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.500597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 
00:29:43.750 [2024-11-06 13:26:25.500859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.500889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.501113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.501143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.501524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.501554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.501790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.501821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.502215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.750 [2024-11-06 13:26:25.502245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.750 qpair failed and we were unable to recover it. 00:29:43.750 [2024-11-06 13:26:25.502698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.502728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.503077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.503107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.503493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.503524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.503752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.503782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.503900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.503933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 
00:29:43.751 [2024-11-06 13:26:25.504186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.504223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.504539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.504569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.504947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.504980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.505273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.505302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.505682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.505720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.506021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.506051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.506391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.506420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.506803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.506834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.507045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.507073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.507443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.507470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 
00:29:43.751 [2024-11-06 13:26:25.507858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.507888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.508296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.508325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.508672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.508701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.509116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.509147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.509525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.509554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.509808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.509838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.510054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.510084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.510350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.510379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.510732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.510774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.510990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.511018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 
00:29:43.751 [2024-11-06 13:26:25.511401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.511430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.511807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.511838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.512288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.512317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.512686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.512729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.512983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.513382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.513412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.513765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.513795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.514199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.514228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.514444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.514472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 00:29:43.751 [2024-11-06 13:26:25.514838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.751 [2024-11-06 13:26:25.514868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.751 qpair failed and we were unable to recover it. 
00:29:43.751 [2024-11-06 13:26:25.515149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.515177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.515560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.515589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.515961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.516198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.516227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.516604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.516634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.517062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.517093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.517453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.517482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.517742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.517780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.518166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.518195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.518425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.518456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 
00:29:43.752 [2024-11-06 13:26:25.518841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.518871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.519256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.519284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.519659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.519688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.520056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.520088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.520314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.520342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.520702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.520733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.521120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.521150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.521409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.521441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.521656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.521686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.521932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.521965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 
00:29:43.752 [2024-11-06 13:26:25.522203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.522231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.522513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.522543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.522794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.522825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.523189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.523220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.523448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.523477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.523815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.523845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.524242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.524272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.524654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.524683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.525058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.525088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.525444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.525474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 
00:29:43.752 [2024-11-06 13:26:25.525845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.525875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.526085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.526115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.526210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.526237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.526570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.526676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c8010 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.527215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.527322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5c8010 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.527712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.527743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.528080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.528109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.528472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.528502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.752 [2024-11-06 13:26:25.528885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.752 [2024-11-06 13:26:25.528916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.752 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.529158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.529187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 
00:29:43.753 [2024-11-06 13:26:25.529427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.529456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.529829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.529860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.530096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.530126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.530346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.530375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.530833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.530864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.531231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.531260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.531679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.531709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.532141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.532172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.532392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.532420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 00:29:43.753 [2024-11-06 13:26:25.532705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.753 [2024-11-06 13:26:25.532734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420 00:29:43.753 qpair failed and we were unable to recover it. 
00:29:43.753 [2024-11-06 13:26:25.533115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.753 [2024-11-06 13:26:25.533144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420
00:29:43.753 qpair failed and we were unable to recover it.
00:29:43.754 [... the same three-line sequence -- connect() failed (errno = 111), sock connection error of tqpair=0x7fb934000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats continuously from 13:26:25.533505 through 13:26:25.577056 ...]
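Every entry above is the same failure repeating: errno = 111 is ECONNREFUSED on Linux, i.e. the host's connect() to the NVMe/TCP target at 10.0.0.2:4420 is answered with a TCP RST because nothing is listening while the target side of this disconnect test is down, so nvme_tcp_qpair_connect_sock() cannot re-establish the queue pair. A minimal standalone sketch with plain POSIX sockets (not SPDK code; loopback is used as a stand-in so the refusal is immediate, and the port is assumed unused) that produces the same errno:

/* Reproduces the errno = 111 seen above. On Linux, 111 is ECONNREFUSED:
 * the TCP SYN reached a host, but no listener was bound to the port, so
 * the kernel answered with RST. This is what posix_sock_create() keeps
 * reporting while the target at 10.0.0.2:4420 is down. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* loopback stand-in for 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With nothing listening on the port this prints:
         * connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}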
00:29:43.756 [... connect() failed (errno = 111) / sock connection error triplets continue for tqpair=0x7fb934000b90 from 13:26:25.577404 through 13:26:25.578916 ...]
00:29:43.756 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:43.757 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:43.757 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:43.757 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:43.757 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:43.757 [... the same triplet, interleaved with the shell trace above, continues for tqpair=0x7fb934000b90 from 13:26:25.579147 through 13:26:25.590030 ...]
00:29:43.757 [... triplets continue for tqpair=0x7fb934000b90 from 13:26:25.590403 through 13:26:25.590838 ...]
00:29:43.757 [2024-11-06 13:26:25.591387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.757 [2024-11-06 13:26:25.591490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:43.757 qpair failed and we were unable to recover it.
00:29:43.758 [... from here the same triplet repeats for the new tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420, from 13:26:25.592098 through 13:26:25.605946 ...]
00:29:43.758 [2024-11-06 13:26:25.606173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.758 [2024-11-06 13:26:25.606203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.758 qpair failed and we were unable to recover it. 00:29:43.758 [2024-11-06 13:26:25.606439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.758 [2024-11-06 13:26:25.606468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.758 qpair failed and we were unable to recover it. 00:29:43.758 [2024-11-06 13:26:25.606869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.758 [2024-11-06 13:26:25.606901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.758 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.607289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.607318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.607530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.607558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.607927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.607957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.608318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.608348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.608728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.608772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.609131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.609470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.609500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 
00:29:43.759 [2024-11-06 13:26:25.609854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.609885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.610234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.610262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.610625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.610656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.611054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.611090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.611445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.611475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.611835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.611866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.612221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.612254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.612344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.612371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.612617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.612646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.612779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.612807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 
00:29:43.759 [2024-11-06 13:26:25.613082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.613112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.613357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.613386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.613633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.613661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.613869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.613900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.614235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.614265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.614631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.614662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.615101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.615132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.615344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.615374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.615773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.615804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.616165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.616194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 
00:29:43.759 [2024-11-06 13:26:25.616554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.616584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.616822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.616853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.617208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.617237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.617609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.617640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.618018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.618049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.618396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.618425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.618796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.618827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.619183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.619211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.619589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.619618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.619958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.619988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 
00:29:43.759 [2024-11-06 13:26:25.620219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.759 [2024-11-06 13:26:25.620250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.759 qpair failed and we were unable to recover it. 00:29:43.759 [2024-11-06 13:26:25.620610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.620640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.621018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.621047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.621403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.621433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.621787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.621820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.622229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.622259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.622508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.622536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.622768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.622799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 [2024-11-06 13:26:25.623156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.760 [2024-11-06 13:26:25.623188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 
00:29:43.760 [2024-11-06 13:26:25.623584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.623614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.760 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.760 [2024-11-06 13:26:25.623970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.624003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:43.760 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.760 [2024-11-06 13:26:25.624220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.760 [2024-11-06 13:26:25.624251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:43.760 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.624522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.624909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.624939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.625323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.625352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.625714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.625765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.626001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.626030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it.
00:29:44.026 [2024-11-06 13:26:25.626466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.626495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.626861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.626892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.627279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.627308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.627679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.627708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.628099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.628129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.628497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.628527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.628899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.628930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.629281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.629310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.629640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.629669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.629917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.629946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 
00:29:44.026 [2024-11-06 13:26:25.630319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.630349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.630733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.630773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.631179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.631207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.631457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.631486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.631909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.631939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.632186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.632215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.632441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.632470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.632694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.026 [2024-11-06 13:26:25.632722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.026 qpair failed and we were unable to recover it. 00:29:44.026 [2024-11-06 13:26:25.633137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.633167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.633410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.633440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 
00:29:44.027 [2024-11-06 13:26:25.633847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.633877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.634252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.634287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.634646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.634674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.635041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.635071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.635313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.635341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.635721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.635757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.636172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.636202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.636424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.636452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.636695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.636723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.637028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.637058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 
00:29:44.027 [2024-11-06 13:26:25.637269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.637298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.637558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.637591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.637845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.637878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.638236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.638266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.638494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.638522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.638923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.638954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.639325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.639354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.639605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.639634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.639960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.639989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.640365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.640393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 
00:29:44.027 [2024-11-06 13:26:25.640764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.640794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.641156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.641184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.641413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.641441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.641805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.641835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.642288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.642317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.642561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.642589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.642818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.642848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.643068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.643096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.643207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.643234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.643581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.643609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 
00:29:44.027 [2024-11-06 13:26:25.644027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.644067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.644315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.644344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.644684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.644712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.645108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.645138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.645479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.645507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.645869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.027 [2024-11-06 13:26:25.645900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.027 qpair failed and we were unable to recover it. 00:29:44.027 [2024-11-06 13:26:25.646256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.646285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.646651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.646680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.647029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.647058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.647445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.647475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 
00:29:44.028 [2024-11-06 13:26:25.647784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.647813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.648186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.648222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.648580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.648608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.649011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.649041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.649423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.649450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.649675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.649703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.650002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.650031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.650321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.650351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.650584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.650616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 00:29:44.028 [2024-11-06 13:26:25.650856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.028 [2024-11-06 13:26:25.650889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 00:29:44.028 qpair failed and we were unable to recover it. 
00:29:44.028 [2024-11-06 13:26:25.651248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.028 [2024-11-06 13:26:25.651278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:44.028 qpair failed and we were unable to recover it.
00:29:44.028 [... the three-record sequence above (connect() failed, errno = 111 / sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 13:26:25.651 through 13:26:25.664, differing only in timestamps ...]
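For context: errno 111 on Linux is ECONNREFUSED, so at this point the host-side initiator is retrying connect() against 10.0.0.2:4420 while nothing is accepting connections yet (the target's nvmf_tcp_listen notice only appears further down, at 13:26:25.712). A quick, purely illustrative way to confirm the errno mapping, not part of the test run:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused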
00:29:44.029 [2024-11-06 13:26:25.664443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.029 [2024-11-06 13:26:25.664473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb930000b90 with addr=10.0.0.2, port=4420
00:29:44.029 qpair failed and we were unable to recover it.
00:29:44.029 Malloc0
00:29:44.029 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:44.029 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:44.029 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:44.029 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:44.030 [2024-11-06 13:26:25.672179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:44.030 [... host-side connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." retries continue interleaved with the target-side output above, from 13:26:25.664 through 13:26:25.680 ...]
00:29:44.030 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:44.030 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:44.031 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:44.031 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:44.031 [... connect() failed (errno = 111) retries continue interleaved with the xtrace above, 13:26:25.680 through 13:26:25.683 ...]
00:29:44.031 [... connect() failed (errno = 111) retries continue, 13:26:25.683 through 13:26:25.693; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:44.031 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:44.031 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:44.031 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:44.032 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:44.032 [... retries continue, 13:26:25.693 through 13:26:25.696 ...]
00:29:44.032 [... connect() failed (errno = 111) retries continue, 13:26:25.696 through 13:26:25.705 ...]
00:29:44.032 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:44.032 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:44.032 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:44.032 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:44.033 [... retries continue, 13:26:25.706 through 13:26:25.712 ...]
00:29:44.033 [2024-11-06 13:26:25.712563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.033 [2024-11-06 13:26:25.723492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.033 [2024-11-06 13:26:25.723623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.033 [2024-11-06 13:26:25.723671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.033 [2024-11-06 13:26:25.723694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.033 [2024-11-06 13:26:25.723714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.033 [2024-11-06 13:26:25.723778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.033 qpair failed and we were unable to recover it. 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.033 13:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1917336 00:29:44.033 [2024-11-06 13:26:25.733332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.033 [2024-11-06 13:26:25.733438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.033 [2024-11-06 13:26:25.733468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.033 [2024-11-06 13:26:25.733485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.033 [2024-11-06 13:26:25.733499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.033 [2024-11-06 13:26:25.733532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.033 qpair failed and we were unable to recover it. 
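Editor's note: the failures from here on are a different stage than the refused TCP connects above: the socket now connects, but the NVMe-oF CONNECT command for an I/O queue is rejected ("Unknown controller ID 0x1" on the target, "sct 1, sc 130" on the host). sct 1 is the command-specific status code type, and 130 is 0x82, which the NVMe-oF spec assigns to "Connect Invalid Parameters" for the Fabrics CONNECT command (SPDK mirrors these as SPDK_NVMF_FABRIC_SC_* constants, if memory serves) -- consistent with the host asking for an I/O qpair on a controller ID the target no longer recognizes. A small C helper decoding that pair; the values are from the spec, the function name is illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* For the Fabrics CONNECT command, command-specific status codes
     * 0x80..0x84 are connect-specific failures (NVMe-oF spec). */
    static const char *connect_sc_str(uint8_t sc)
    {
        switch (sc) {
        case 0x80: return "Connect Incompatible Format";
        case 0x81: return "Connect Controller Busy";
        case 0x82: return "Connect Invalid Parameters";
        case 0x83: return "Connect Restart Discovery";
        case 0x84: return "Connect Invalid Host";
        default:   return "unknown";
        }
    }

    int main(void)
    {
        /* The values printed by nvme_fabric_qpair_connect_poll in this log. */
        uint8_t sct = 1, sc = 130;
        printf("sct %u, sc %u (0x%02x): %s\n", sct, sc, sc,
               sct == 1 ? connect_sc_str(sc) : "not command-specific");
        return 0;
    }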
00:29:44.033 [... the same CONNECT failure sequence repeats 60 more times between 13:26:25.743230 and 13:26:26.335097: ctrlr.c:762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c:599/610: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 / Connect command completed with error: sct 1, sc 130; nvme_tcp.c:2348/2125: Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x7fb930000b90; nvme_qpair.c:812: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4; each attempt ends with "qpair failed and we were unable to recover it." (the elapsed-time prefix advances from 00:29:44.033 to 00:29:44.564 over the run) ...]
00:29:44.564 [2024-11-06 13:26:26.345058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.345154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.345172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.345179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.345186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.345201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 00:29:44.564 [2024-11-06 13:26:26.355143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.355231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.355247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.355254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.355260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.355276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 00:29:44.564 [2024-11-06 13:26:26.365158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.365236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.365252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.365259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.365266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.365281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 
00:29:44.564 [2024-11-06 13:26:26.375171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.375286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.375303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.375315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.375322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.375338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 00:29:44.564 [2024-11-06 13:26:26.385085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.385150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.385166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.385173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.385179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.385195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 00:29:44.564 [2024-11-06 13:26:26.395232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.564 [2024-11-06 13:26:26.395316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.564 [2024-11-06 13:26:26.395332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.564 [2024-11-06 13:26:26.395339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.564 [2024-11-06 13:26:26.395345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.564 [2024-11-06 13:26:26.395360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.564 qpair failed and we were unable to recover it. 
00:29:44.564 [2024-11-06 13:26:26.405168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.405275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.405295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.405302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.405308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.405325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 00:29:44.565 [2024-11-06 13:26:26.415279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.415340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.415357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.415365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.415371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.415392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 00:29:44.565 [2024-11-06 13:26:26.425281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.425344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.425361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.425368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.425374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.425390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 
00:29:44.565 [2024-11-06 13:26:26.435343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.435459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.435475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.435482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.435489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.435504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 00:29:44.565 [2024-11-06 13:26:26.445363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.445434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.445450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.445457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.445463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.445479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 00:29:44.565 [2024-11-06 13:26:26.455369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.565 [2024-11-06 13:26:26.455462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.565 [2024-11-06 13:26:26.455478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.565 [2024-11-06 13:26:26.455485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.565 [2024-11-06 13:26:26.455491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.565 [2024-11-06 13:26:26.455507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.565 qpair failed and we were unable to recover it. 
00:29:44.827 [2024-11-06 13:26:26.465370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.827 [2024-11-06 13:26:26.465448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.827 [2024-11-06 13:26:26.465464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.827 [2024-11-06 13:26:26.465471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.827 [2024-11-06 13:26:26.465478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.827 [2024-11-06 13:26:26.465493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.827 qpair failed and we were unable to recover it. 00:29:44.827 [2024-11-06 13:26:26.475430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.827 [2024-11-06 13:26:26.475509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.827 [2024-11-06 13:26:26.475528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.827 [2024-11-06 13:26:26.475535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.827 [2024-11-06 13:26:26.475545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.827 [2024-11-06 13:26:26.475563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.827 qpair failed and we were unable to recover it. 00:29:44.827 [2024-11-06 13:26:26.485492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.827 [2024-11-06 13:26:26.485559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.827 [2024-11-06 13:26:26.485578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.827 [2024-11-06 13:26:26.485586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.827 [2024-11-06 13:26:26.485592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.827 [2024-11-06 13:26:26.485609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.827 qpair failed and we were unable to recover it. 
00:29:44.828 [2024-11-06 13:26:26.495498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.495569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.495585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.495593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.495599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.495615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.505539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.505602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.505623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.505631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.505637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.505653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.515457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.515532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.515548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.515555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.515561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.515577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 
00:29:44.828 [2024-11-06 13:26:26.525567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.525644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.525660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.525667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.525673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.525690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.535475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.535530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.535546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.535554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.535561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.535576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.545499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.545569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.545585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.545592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.545604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.545621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 
00:29:44.828 [2024-11-06 13:26:26.555686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.555796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.555813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.555821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.555828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.555845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.565755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.565830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.565846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.565854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.565861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.565877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.575731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.575838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.575854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.575861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.575868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.575884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 
00:29:44.828 [2024-11-06 13:26:26.585735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.585798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.585814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.585821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.585828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.585843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.595808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.595885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.595904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.595913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.595921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.595940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.605825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.605900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.605918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.605925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.605931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.605948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 
00:29:44.828 [2024-11-06 13:26:26.615844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.828 [2024-11-06 13:26:26.615915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.828 [2024-11-06 13:26:26.615932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.828 [2024-11-06 13:26:26.615939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.828 [2024-11-06 13:26:26.615946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.828 [2024-11-06 13:26:26.615961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.828 qpair failed and we were unable to recover it. 00:29:44.828 [2024-11-06 13:26:26.625866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.625937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.625953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.625960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.625967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.625983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.635906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.635975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.635996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.636004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.636010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.636026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 
00:29:44.829 [2024-11-06 13:26:26.645966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.646036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.646052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.646059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.646065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.646081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.655945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.656011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.656026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.656033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.656040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.656055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.665969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.666032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.666048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.666056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.666062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.666078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 
00:29:44.829 [2024-11-06 13:26:26.676068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.676139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.676155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.676162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.676173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.676189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.686112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.686241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.686260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.686267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.686273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.686290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.696092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.696159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.696175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.696183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.696189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.696205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 
00:29:44.829 [2024-11-06 13:26:26.706096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.706166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.706182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.706190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.706196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.706211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:44.829 [2024-11-06 13:26:26.716039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.829 [2024-11-06 13:26:26.716159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.829 [2024-11-06 13:26:26.716177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.829 [2024-11-06 13:26:26.716184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.829 [2024-11-06 13:26:26.716190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:44.829 [2024-11-06 13:26:26.716221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:44.829 qpair failed and we were unable to recover it. 00:29:45.091 [2024-11-06 13:26:26.726071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.726145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.726162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.726169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.726175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.726191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 
00:29:45.092 [2024-11-06 13:26:26.736220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.736312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.736328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.736335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.736341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.736357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.746213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.746279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.746296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.746303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.746310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.746326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.756258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.756325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.756341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.756348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.756355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.756371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 
00:29:45.092 [2024-11-06 13:26:26.766192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.766280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.766305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.766319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.766325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.766343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.776199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.776260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.776278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.776285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.776292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.776308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.786346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.786411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.786429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.786437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.786444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.786461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 
00:29:45.092 [2024-11-06 13:26:26.796377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.796459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.796476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.796483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.796489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.796505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.806433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.806509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.806524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.806536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.806543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.806558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.816461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.816527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.816543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.816550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.816557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.816573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 
00:29:45.092 [2024-11-06 13:26:26.826432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.826545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.826561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.826568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.826575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.826591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.836390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.836457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.836473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.836480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.836486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.836503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.092 qpair failed and we were unable to recover it. 00:29:45.092 [2024-11-06 13:26:26.846578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.092 [2024-11-06 13:26:26.846685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.092 [2024-11-06 13:26:26.846702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.092 [2024-11-06 13:26:26.846710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.092 [2024-11-06 13:26:26.846716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.092 [2024-11-06 13:26:26.846743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 
00:29:45.093 [2024-11-06 13:26:26.856571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.856684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.856700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.856708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.856714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.856730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.866636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.866721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.866738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.866751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.866758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.866775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.876641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.876708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.876724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.876731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.876737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.876757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 
00:29:45.093 [2024-11-06 13:26:26.886696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.886768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.886784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.886791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.886798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.886813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.896737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.896831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.896850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.896858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.896864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.896881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.906693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.906756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.906773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.906780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.906787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.906803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 
00:29:45.093 [2024-11-06 13:26:26.916760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.916830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.916846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.916854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.916860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.916875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.926825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.926900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.926916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.926923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.926929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.926945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.936806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.936865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.936882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.936894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.936900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.936916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 
00:29:45.093 [2024-11-06 13:26:26.946830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.946890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.946907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.946914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.946921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.946937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.956885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.956959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.956976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.956983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.956989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.957006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.093 [2024-11-06 13:26:26.966932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.967009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.967026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.967033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.967041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.967057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 
00:29:45.093 [2024-11-06 13:26:26.976932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.093 [2024-11-06 13:26:26.976995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.093 [2024-11-06 13:26:26.977011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.093 [2024-11-06 13:26:26.977019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.093 [2024-11-06 13:26:26.977027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.093 [2024-11-06 13:26:26.977048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.093 qpair failed and we were unable to recover it. 00:29:45.094 [2024-11-06 13:26:26.986948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.094 [2024-11-06 13:26:26.987006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.094 [2024-11-06 13:26:26.987022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.094 [2024-11-06 13:26:26.987029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.094 [2024-11-06 13:26:26.987035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.094 [2024-11-06 13:26:26.987051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.094 qpair failed and we were unable to recover it. 00:29:45.356 [2024-11-06 13:26:26.996996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:26.997061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:26.997076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:26.997084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:26.997090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:26.997106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 
00:29:45.356 [2024-11-06 13:26:27.007039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.007110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.007125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.007133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.007139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.007154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 00:29:45.356 [2024-11-06 13:26:27.017096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.017205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.017221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.017228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.017235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.017251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 00:29:45.356 [2024-11-06 13:26:27.026948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.027019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.027035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.027042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.027048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.027064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 
00:29:45.356 [2024-11-06 13:26:27.037150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.037245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.037260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.037267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.037275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.037290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 00:29:45.356 [2024-11-06 13:26:27.047151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.047226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.047242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.047249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.047255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.047271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 00:29:45.356 [2024-11-06 13:26:27.057155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.356 [2024-11-06 13:26:27.057217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.356 [2024-11-06 13:26:27.057233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.356 [2024-11-06 13:26:27.057240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.356 [2024-11-06 13:26:27.057246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.356 [2024-11-06 13:26:27.057262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.356 qpair failed and we were unable to recover it. 
00:29:45.356 [2024-11-06 13:26:27.067198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.067259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.067279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.067286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.067292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.067308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.077320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.077409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.077426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.077433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.077439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.077455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.087317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.087395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.087411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.087418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.087424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.087440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 
00:29:45.357 [2024-11-06 13:26:27.097316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.097375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.097391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.097398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.097404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.097420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.107339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.107401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.107418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.107426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.107437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.107454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.117397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.117515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.117532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.117539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.117546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.117562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 
00:29:45.357 [2024-11-06 13:26:27.127298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.127375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.127390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.127398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.127404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.127419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.137441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.137507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.137523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.137530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.137536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.137552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.147468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.147537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.147553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.147560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.147566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.147581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 
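Every record in this loop carries the same connection parameters (trtype TCP, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), so the failing CONNECT path can be retried by hand after the run. A minimal sketch with stock nvme-cli, assuming the tool is installed and the target is still listening on that address; this is an illustrative command, not part of the test flow:

# manual retry of the failing CONNECT path (sketch)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# a healthy target creates a /dev/nvme* device; a broken one should reproduce the Connect error above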
00:29:45.357 [2024-11-06 13:26:27.157379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.157452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.157485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.157495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.157503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.157526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.167436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.167513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.167533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.167540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.167547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.167564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.177561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.177642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.177660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.177668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.177676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.177693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 
00:29:45.357 [2024-11-06 13:26:27.187436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.357 [2024-11-06 13:26:27.187510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.357 [2024-11-06 13:26:27.187526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.357 [2024-11-06 13:26:27.187534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.357 [2024-11-06 13:26:27.187540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.357 [2024-11-06 13:26:27.187556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.357 qpair failed and we were unable to recover it. 00:29:45.357 [2024-11-06 13:26:27.197627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.197693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.197716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.197723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.197729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.197752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 00:29:45.358 [2024-11-06 13:26:27.207557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.207631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.207647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.207655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.207661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.207677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 
00:29:45.358 [2024-11-06 13:26:27.217685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.217791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.217807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.217815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.217821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.217837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 00:29:45.358 [2024-11-06 13:26:27.227720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.227792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.227808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.227815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.227822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.227838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 00:29:45.358 [2024-11-06 13:26:27.237736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.237809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.237825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.237832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.237844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.237861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 
00:29:45.358 [2024-11-06 13:26:27.247783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.358 [2024-11-06 13:26:27.247869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.358 [2024-11-06 13:26:27.247885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.358 [2024-11-06 13:26:27.247892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.358 [2024-11-06 13:26:27.247898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.358 [2024-11-06 13:26:27.247914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.358 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.257800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.257898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.257913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.257921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.257927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.257943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.267801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.267867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.267882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.267889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.267896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.267912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 
00:29:45.620 [2024-11-06 13:26:27.277887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.277951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.277968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.277976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.277982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.277998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.287972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.288046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.288062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.288069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.288076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.288091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.297904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.297958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.297973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.297980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.297987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.298002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 
00:29:45.620 [2024-11-06 13:26:27.308217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.308293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.308326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.308334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.308340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.308364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.318012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.318085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.318102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.318110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.318116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.318132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.328072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.328180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.328199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.328207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.328213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.328228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 
00:29:45.620 [2024-11-06 13:26:27.337913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.337968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.337983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.337990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.337996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.338011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.620 qpair failed and we were unable to recover it. 00:29:45.620 [2024-11-06 13:26:27.347999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.620 [2024-11-06 13:26:27.348057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.620 [2024-11-06 13:26:27.348071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.620 [2024-11-06 13:26:27.348079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.620 [2024-11-06 13:26:27.348085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.620 [2024-11-06 13:26:27.348100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.621 qpair failed and we were unable to recover it. 00:29:45.621 [2024-11-06 13:26:27.358110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.621 [2024-11-06 13:26:27.358187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.621 [2024-11-06 13:26:27.358201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.621 [2024-11-06 13:26:27.358208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.621 [2024-11-06 13:26:27.358215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:45.621 [2024-11-06 13:26:27.358229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.621 qpair failed and we were unable to recover it. 
00:29:45.621 [2024-11-06 13:26:27.367975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.621 [2024-11-06 13:26:27.368028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.621 [2024-11-06 13:26:27.368042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.621 [2024-11-06 13:26:27.368059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.621 [2024-11-06 13:26:27.368065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90
00:29:45.621 [2024-11-06 13:26:27.368080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:45.621 qpair failed and we were unable to recover it.
[the identical seven-line CONNECT-failure sequence repeats for 68 further attempts at roughly 10 ms intervals, timestamps 13:26:27.378104 through 13:26:28.049760 (elapsed 00:29:45.621 to 00:29:46.227); only the timestamps change, and every attempt ends with "qpair failed and we were unable to recover it."]
00:29:46.227 [2024-11-06 13:26:28.059917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.059966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.059979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.059989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.059996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.060010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 00:29:46.227 [2024-11-06 13:26:28.069937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.069983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.069997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.070004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.070010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.070024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 00:29:46.227 [2024-11-06 13:26:28.079892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.079961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.079975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.079982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.079989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.080003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 
00:29:46.227 [2024-11-06 13:26:28.090011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.090094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.090108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.090115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.090121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.090135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 00:29:46.227 [2024-11-06 13:26:28.099950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.100000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.100012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.100019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.100025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.100042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 00:29:46.227 [2024-11-06 13:26:28.110049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.110098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.110111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.110117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.110124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.110138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 
00:29:46.227 [2024-11-06 13:26:28.120061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.227 [2024-11-06 13:26:28.120119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.227 [2024-11-06 13:26:28.120132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.227 [2024-11-06 13:26:28.120138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.227 [2024-11-06 13:26:28.120145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.227 [2024-11-06 13:26:28.120159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.227 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.130109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.130158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.130171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.130178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.130185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.130199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.140143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.140238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.140253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.140260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.140267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.140281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 
00:29:46.490 [2024-11-06 13:26:28.150146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.150197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.150210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.150217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.150223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.150237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.160228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.160282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.160294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.160301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.160307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.160321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.170211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.170262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.170275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.170282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.170288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.170302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 
00:29:46.490 [2024-11-06 13:26:28.180243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.180290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.180303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.180310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.180316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.180330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.190268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.190314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.190330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.190336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.190343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.190357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.200334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.200415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.200428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.200436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.200442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.200455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 
00:29:46.490 [2024-11-06 13:26:28.210340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.210460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.210474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.210480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.210487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.210501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.220248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.490 [2024-11-06 13:26:28.220300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.490 [2024-11-06 13:26:28.220313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.490 [2024-11-06 13:26:28.220320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.490 [2024-11-06 13:26:28.220326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.490 [2024-11-06 13:26:28.220340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.490 qpair failed and we were unable to recover it. 00:29:46.490 [2024-11-06 13:26:28.230236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.230283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.230297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.230304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.230314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.230334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 
00:29:46.491 [2024-11-06 13:26:28.240440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.240543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.240556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.240563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.240569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.240583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.250434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.250490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.250504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.250511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.250517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.250535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.260418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.260459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.260473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.260479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.260486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.260500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 
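Note the cadence: the attempts land roughly 10 ms apart, each one allocating a fresh qpair, failing the fabrics CONNECT, and being declared unrecoverable. Once spdk_nvme_qpair_process_completions() has returned -ENXIO (logged above as "CQ transport error -6"), the qpair stays failed until the host explicitly reconnects it. A minimal sketch of that recovery step, assuming SPDK's public NVMe host API:

```c
/* Sketch only: poll a qpair and, on the -ENXIO transport failure seen
 * above, try to reconnect it instead of abandoning it. */
#include <errno.h>
#include "spdk/nvme.h"

static int32_t poll_or_reconnect(struct spdk_nvme_qpair *qpair)
{
	/* 0 == no completion limit for this poll */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* -EAGAIN from the reconnect would mean the controller is
		 * still resetting; callers simply poll again later. */
		rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
	}
	return rc;
}
```

Whether reconnecting the same qpair or allocating a new one is appropriate depends on why the target forgot the controller; if the admin queue itself is gone, only a full controller reset or re-attach will help.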
00:29:46.491 [2024-11-06 13:26:28.270474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.270523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.270537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.270543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.270550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.270564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.280550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.280607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.280620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.280627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.280634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.280647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.290544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.290599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.290612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.290619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.290625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.290639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 
00:29:46.491 [2024-11-06 13:26:28.300532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.300578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.300591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.300598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.300604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.300618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.310595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.310645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.310658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.310665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.310671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.310685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.320659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.320711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.320727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.320734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.320740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.320760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 
00:29:46.491 [2024-11-06 13:26:28.330660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.330712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.330725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.330732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.330739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.330756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.340708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.340774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.340787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.340794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.340800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.340814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 00:29:46.491 [2024-11-06 13:26:28.350672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.350731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.491 [2024-11-06 13:26:28.350749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.491 [2024-11-06 13:26:28.350756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.491 [2024-11-06 13:26:28.350762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.491 [2024-11-06 13:26:28.350777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.491 qpair failed and we were unable to recover it. 
00:29:46.491 [2024-11-06 13:26:28.360771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.491 [2024-11-06 13:26:28.360828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.492 [2024-11-06 13:26:28.360841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.492 [2024-11-06 13:26:28.360848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.492 [2024-11-06 13:26:28.360857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.492 [2024-11-06 13:26:28.360872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-11-06 13:26:28.370753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.492 [2024-11-06 13:26:28.370840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.492 [2024-11-06 13:26:28.370853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.492 [2024-11-06 13:26:28.370860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.492 [2024-11-06 13:26:28.370867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.492 [2024-11-06 13:26:28.370880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-11-06 13:26:28.380775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.492 [2024-11-06 13:26:28.380830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.492 [2024-11-06 13:26:28.380843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.492 [2024-11-06 13:26:28.380850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.492 [2024-11-06 13:26:28.380856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.492 [2024-11-06 13:26:28.380870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.492 qpair failed and we were unable to recover it. 
00:29:46.754 [2024-11-06 13:26:28.390757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.390803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.390816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.390822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.390829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.390843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.400875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.400929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.400942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.400949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.400955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.400969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.410898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.410950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.410963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.410970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.410976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.410990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 
00:29:46.754 [2024-11-06 13:26:28.420755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.420805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.420818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.420825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.420831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.420845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.430895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.430948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.430961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.430968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.430974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.430988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.440858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.440910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.440924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.440930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.440937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.440952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 
00:29:46.754 [2024-11-06 13:26:28.450863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.450924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.450938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.450945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.450952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.450966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.461026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.461078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.461091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.754 [2024-11-06 13:26:28.461098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.754 [2024-11-06 13:26:28.461104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.754 [2024-11-06 13:26:28.461118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.754 qpair failed and we were unable to recover it. 00:29:46.754 [2024-11-06 13:26:28.471041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.754 [2024-11-06 13:26:28.471091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.754 [2024-11-06 13:26:28.471103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.471110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.471116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.471130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 
00:29:46.755 [2024-11-06 13:26:28.481102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.481194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.481207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.481214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.481220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.481234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.491081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.491133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.491146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.491156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.491162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.491176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.501109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.501155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.501168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.501175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.501181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.501195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 
00:29:46.755 [2024-11-06 13:26:28.511104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.511153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.511166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.511173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.511179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.511193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.521191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.521243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.521256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.521263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.521269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.521283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.531194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.531239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.531252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.531259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.531265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.531282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 
00:29:46.755 [2024-11-06 13:26:28.541199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.541284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.541297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.541304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.541310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.541323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.551240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.551289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.551302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.551309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.551315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.551329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.561293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.561384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.561398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.561405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.561411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.561426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 
00:29:46.755 [2024-11-06 13:26:28.571301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.571363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.571376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.571383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.571390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.571404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.581350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.581401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.581414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.581420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.581427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.581441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 00:29:46.755 [2024-11-06 13:26:28.591338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.591383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.591395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.591402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.755 [2024-11-06 13:26:28.591408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.755 [2024-11-06 13:26:28.591422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.755 qpair failed and we were unable to recover it. 
00:29:46.755 [2024-11-06 13:26:28.601287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.755 [2024-11-06 13:26:28.601343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.755 [2024-11-06 13:26:28.601356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.755 [2024-11-06 13:26:28.601363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.601369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.601383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 00:29:46.756 [2024-11-06 13:26:28.611400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.756 [2024-11-06 13:26:28.611457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.756 [2024-11-06 13:26:28.611470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.756 [2024-11-06 13:26:28.611477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.611483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.611497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 00:29:46.756 [2024-11-06 13:26:28.621413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.756 [2024-11-06 13:26:28.621465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.756 [2024-11-06 13:26:28.621493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.756 [2024-11-06 13:26:28.621501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.621508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.621528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 
00:29:46.756 [2024-11-06 13:26:28.631437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.756 [2024-11-06 13:26:28.631493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.756 [2024-11-06 13:26:28.631517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.756 [2024-11-06 13:26:28.631525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.631532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.631552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 00:29:46.756 [2024-11-06 13:26:28.641513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.756 [2024-11-06 13:26:28.641573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.756 [2024-11-06 13:26:28.641597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.756 [2024-11-06 13:26:28.641605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.641612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.641632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 00:29:46.756 [2024-11-06 13:26:28.651371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.756 [2024-11-06 13:26:28.651420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.756 [2024-11-06 13:26:28.651434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.756 [2024-11-06 13:26:28.651441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.756 [2024-11-06 13:26:28.651448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:46.756 [2024-11-06 13:26:28.651463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.756 qpair failed and we were unable to recover it. 
00:29:47.018 [2024-11-06 13:26:28.661532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.661580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.661593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.661601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.661607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.661626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.671404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.671456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.671469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.671476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.671483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.671497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.681624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.681676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.681689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.681696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.681702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.681716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 
00:29:47.018 [2024-11-06 13:26:28.691613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.691662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.691675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.691682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.691688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.691702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.701605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.701678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.701691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.701698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.701704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.701718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.711644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.711719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.711732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.711739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.711749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.711764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 
00:29:47.018 [2024-11-06 13:26:28.721715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.721775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.721787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.721794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.721801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.721814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.731675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.731719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.731732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.731738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.731749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.731763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.741702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.741749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.741762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.741768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.741775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.741788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 
00:29:47.018 [2024-11-06 13:26:28.751780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.751858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.751874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.018 [2024-11-06 13:26:28.751881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.018 [2024-11-06 13:26:28.751888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.018 [2024-11-06 13:26:28.751901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.018 qpair failed and we were unable to recover it. 00:29:47.018 [2024-11-06 13:26:28.761835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.018 [2024-11-06 13:26:28.761888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.018 [2024-11-06 13:26:28.761900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.761907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.761914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.761927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.771692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.771761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.771775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.771783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.771789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.771804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 
00:29:47.019 [2024-11-06 13:26:28.781833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.781879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.781893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.781899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.781906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.781920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.791839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.791900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.791914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.791921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.791932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.791948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.801907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.801963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.801977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.801984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.801990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.802004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 
00:29:47.019 [2024-11-06 13:26:28.811928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.812022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.812035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.812042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.812048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.812062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.821974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.822023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.822036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.822043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.822049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.822063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.831985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.832036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.832049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.832056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.832062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.832076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 
00:29:47.019 [2024-11-06 13:26:28.842054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.842106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.842120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.842127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.842133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.842147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.852047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.852097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.852110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.852117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.852123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.852137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.862077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.862122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.862135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.862142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.862148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.862162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 
00:29:47.019 [2024-11-06 13:26:28.872071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.872119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.872132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.872139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.872146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.872159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.882162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.882216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.882236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.882243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.882249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.019 [2024-11-06 13:26:28.882264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.019 qpair failed and we were unable to recover it. 00:29:47.019 [2024-11-06 13:26:28.892156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.019 [2024-11-06 13:26:28.892203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.019 [2024-11-06 13:26:28.892216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.019 [2024-11-06 13:26:28.892223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.019 [2024-11-06 13:26:28.892229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.020 [2024-11-06 13:26:28.892242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.020 qpair failed and we were unable to recover it. 
00:29:47.020 [2024-11-06 13:26:28.902143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.020 [2024-11-06 13:26:28.902190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.020 [2024-11-06 13:26:28.902203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.020 [2024-11-06 13:26:28.902210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.020 [2024-11-06 13:26:28.902216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.020 [2024-11-06 13:26:28.902230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.020 qpair failed and we were unable to recover it. 00:29:47.020 [2024-11-06 13:26:28.912200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.020 [2024-11-06 13:26:28.912249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.020 [2024-11-06 13:26:28.912262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.020 [2024-11-06 13:26:28.912269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.020 [2024-11-06 13:26:28.912275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.020 [2024-11-06 13:26:28.912289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.020 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:28.922273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.922325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.922337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.922348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.922354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.922368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 
00:29:47.282 [2024-11-06 13:26:28.932245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.932297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.932310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.932317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.932323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.932337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:28.942269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.942317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.942330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.942337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.942343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.942357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:28.952314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.952361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.952374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.952381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.952387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.952401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 
00:29:47.282 [2024-11-06 13:26:28.962404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.962457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.962470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.962477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.962483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.962498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:28.972386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.972441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.972466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.972474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.972481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.972501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:28.982401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.982452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.982476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.982485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.982492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.982511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 
00:29:47.282 [2024-11-06 13:26:28.992432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:28.992482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:28.992506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:28.992515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:28.992522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:28.992541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:29.002492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:29.002576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:29.002591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:29.002598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:29.002604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:29.002620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:29.012407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:29.012500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:29.012514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:29.012521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:29.012527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:29.012542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 
00:29:47.282 [2024-11-06 13:26:29.022500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.282 [2024-11-06 13:26:29.022600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.282 [2024-11-06 13:26:29.022613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.282 [2024-11-06 13:26:29.022620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.282 [2024-11-06 13:26:29.022626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.282 [2024-11-06 13:26:29.022642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.282 qpair failed and we were unable to recover it. 00:29:47.282 [2024-11-06 13:26:29.032398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.283 [2024-11-06 13:26:29.032447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.283 [2024-11-06 13:26:29.032462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.283 [2024-11-06 13:26:29.032469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.283 [2024-11-06 13:26:29.032476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.283 [2024-11-06 13:26:29.032491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.283 qpair failed and we were unable to recover it. 00:29:47.283 [2024-11-06 13:26:29.042466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.283 [2024-11-06 13:26:29.042535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.283 [2024-11-06 13:26:29.042548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.283 [2024-11-06 13:26:29.042555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.283 [2024-11-06 13:26:29.042562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.283 [2024-11-06 13:26:29.042576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.283 qpair failed and we were unable to recover it. 
00:29:47.283 [2024-11-06 13:26:29.052611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.283 [2024-11-06 13:26:29.052661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.283 [2024-11-06 13:26:29.052673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.283 [2024-11-06 13:26:29.052685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.283 [2024-11-06 13:26:29.052691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.283 [2024-11-06 13:26:29.052706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.283 qpair failed and we were unable to recover it. 00:29:47.283 [2024-11-06 13:26:29.062605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.283 [2024-11-06 13:26:29.062677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.283 [2024-11-06 13:26:29.062690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.283 [2024-11-06 13:26:29.062697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.283 [2024-11-06 13:26:29.062703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.283 [2024-11-06 13:26:29.062717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.283 qpair failed and we were unable to recover it. 00:29:47.283 [2024-11-06 13:26:29.072655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.283 [2024-11-06 13:26:29.072709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.283 [2024-11-06 13:26:29.072722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.283 [2024-11-06 13:26:29.072728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.283 [2024-11-06 13:26:29.072735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:47.283 [2024-11-06 13:26:29.072754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.283 qpair failed and we were unable to recover it. 
00:29:47.283 [2024-11-06 13:26:29.082614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:47.283 [2024-11-06 13:26:29.082667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:47.283 [2024-11-06 13:26:29.082679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:47.283 [2024-11-06 13:26:29.082686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:47.283 [2024-11-06 13:26:29.082692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90
00:29:47.283 [2024-11-06 13:26:29.082707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.283 qpair failed and we were unable to recover it.
[... the same six-record CONNECT failure sequence, ending in "qpair failed and we were unable to recover it.", repeats for every retry at roughly 10 ms intervals from 13:26:29.092 through 13:26:29.764 (pipeline time 00:29:47.283 to 00:29:48.076); only the timestamps advance, always on tqpair 0x7fb930000b90, qpair id 4 ...]
00:29:48.076 [2024-11-06 13:26:29.774510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.774565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.774589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.774598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.774605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.774624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.784386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.784433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.784448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.784455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.784467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.784483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.794563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.794614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.794627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.794634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.794641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.794655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 
00:29:48.076 [2024-11-06 13:26:29.804488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.804544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.804559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.804566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.804572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.804587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.814585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.814637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.814650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.814657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.814664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.814677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.824490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.824539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.824551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.824558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.824565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.824579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 
00:29:48.076 [2024-11-06 13:26:29.834691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.834773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.834787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.834794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.834800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.834815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.844601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.844659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.844675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.844682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.844689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.844707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.854611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.854670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.854685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.854692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.854699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.854714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 
00:29:48.076 [2024-11-06 13:26:29.864726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.864780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.076 [2024-11-06 13:26:29.864793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.076 [2024-11-06 13:26:29.864800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.076 [2024-11-06 13:26:29.864807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.076 [2024-11-06 13:26:29.864821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.076 qpair failed and we were unable to recover it. 00:29:48.076 [2024-11-06 13:26:29.874723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.076 [2024-11-06 13:26:29.874774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.874791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.874799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.874805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.874820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.884835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.884929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.884942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.884949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.884955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.884970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 
00:29:48.077 [2024-11-06 13:26:29.894812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.894866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.894879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.894886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.894892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.894907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.904800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.904848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.904861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.904868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.904875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.904889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.914844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.914892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.914905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.914913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.914923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.914937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 
00:29:48.077 [2024-11-06 13:26:29.924935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.924987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.925000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.925007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.925014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.925028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.934947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.934999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.935012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.935019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.935026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.935040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.944821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.944874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.944887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.944894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.944901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.944915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 
00:29:48.077 [2024-11-06 13:26:29.954996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.955040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.955053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.955060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.955067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.955081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.077 [2024-11-06 13:26:29.964919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.077 [2024-11-06 13:26:29.964973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.077 [2024-11-06 13:26:29.964986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.077 [2024-11-06 13:26:29.964993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.077 [2024-11-06 13:26:29.965000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.077 [2024-11-06 13:26:29.965014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.077 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:29.975027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:29.975106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:29.975120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:29.975127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:29.975133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:29.975148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 
00:29:48.339 [2024-11-06 13:26:29.984948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:29.984999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:29.985012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:29.985020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:29.985026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:29.985041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:29.995093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:29.995142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:29.995155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:29.995162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:29.995169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:29.995183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.005176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.005229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.005246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.005254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.005260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.005275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 
00:29:48.339 [2024-11-06 13:26:30.015168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.015284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.015299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.015307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.015314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.015330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.025168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.025214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.025227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.025235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.025242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.025256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.035208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.035256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.035269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.035276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.035283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.035297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 
00:29:48.339 [2024-11-06 13:26:30.045270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.045322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.045335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.045346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.045352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.045367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.055139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.055189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.055202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.055209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.055216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.055231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.065260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.065314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.065327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.065335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.065342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.065357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 
00:29:48.339 [2024-11-06 13:26:30.075302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.075354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.075368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.075375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.075382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.075396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.085396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.339 [2024-11-06 13:26:30.085460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.339 [2024-11-06 13:26:30.085473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.339 [2024-11-06 13:26:30.085481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.339 [2024-11-06 13:26:30.085487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.339 [2024-11-06 13:26:30.085502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.339 qpair failed and we were unable to recover it. 00:29:48.339 [2024-11-06 13:26:30.095337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.095386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.095399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.095406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.095413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.095427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 
00:29:48.340 [2024-11-06 13:26:30.105402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.105453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.105466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.105473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.105479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.105494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.115427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.115483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.115496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.115503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.115510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.115524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.125502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.125568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.125582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.125589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.125596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.125611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 
00:29:48.340 [2024-11-06 13:26:30.135477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.135532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.135545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.135553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.135559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.135573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.145525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.145579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.145593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.145600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.145607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.145621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.155539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.155582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.155596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.155603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.155610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.155624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 
00:29:48.340 [2024-11-06 13:26:30.165613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.165669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.165682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.165689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.165696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.165710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.175605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.175658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.175672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.175682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.175689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.175704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.185674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.185748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.185762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.185769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.185776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.185791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 
00:29:48.340 [2024-11-06 13:26:30.195644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.195692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.195705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.195712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.195718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.195733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.205711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.205764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.205778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.205785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.205791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.205806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 00:29:48.340 [2024-11-06 13:26:30.215705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.215764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.340 [2024-11-06 13:26:30.215777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.340 [2024-11-06 13:26:30.215787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.340 [2024-11-06 13:26:30.215794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.340 [2024-11-06 13:26:30.215812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.340 qpair failed and we were unable to recover it. 
00:29:48.340 [2024-11-06 13:26:30.225733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.340 [2024-11-06 13:26:30.225783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.341 [2024-11-06 13:26:30.225798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.341 [2024-11-06 13:26:30.225805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.341 [2024-11-06 13:26:30.225812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.341 [2024-11-06 13:26:30.225827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.341 qpair failed and we were unable to recover it. 00:29:48.341 [2024-11-06 13:26:30.235607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.341 [2024-11-06 13:26:30.235659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.341 [2024-11-06 13:26:30.235675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.341 [2024-11-06 13:26:30.235682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.341 [2024-11-06 13:26:30.235688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.341 [2024-11-06 13:26:30.235704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.341 qpair failed and we were unable to recover it. 00:29:48.603 [2024-11-06 13:26:30.245823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:48.603 [2024-11-06 13:26:30.245912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:48.603 [2024-11-06 13:26:30.245925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:48.603 [2024-11-06 13:26:30.245933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:48.603 [2024-11-06 13:26:30.245939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:48.603 [2024-11-06 13:26:30.245955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:48.603 qpair failed and we were unable to recover it. 
00:29:48.603 [... 66 further CONNECT attempts, 13:26:30.255694 through 13:26:30.907518, fail with the identical seven-message sequence shown above: Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7fb930000b90; CQ transport error -6 (No such device or address) on qpair id 4; qpair failed and we were unable to recover it. ...]
00:29:49.130 [2024-11-06 13:26:30.917563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.917609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.130 [2024-11-06 13:26:30.917622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.130 [2024-11-06 13:26:30.917628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.130 [2024-11-06 13:26:30.917635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:49.130 [2024-11-06 13:26:30.917648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.130 qpair failed and we were unable to recover it. 00:29:49.130 [2024-11-06 13:26:30.927503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.927554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.130 [2024-11-06 13:26:30.927567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.130 [2024-11-06 13:26:30.927574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.130 [2024-11-06 13:26:30.927580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:49.130 [2024-11-06 13:26:30.927594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.130 qpair failed and we were unable to recover it. 00:29:49.130 [2024-11-06 13:26:30.937486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.937535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.130 [2024-11-06 13:26:30.937548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.130 [2024-11-06 13:26:30.937554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.130 [2024-11-06 13:26:30.937561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb930000b90 00:29:49.130 [2024-11-06 13:26:30.937575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:49.130 qpair failed and we were unable to recover it. 
00:29:49.130 [2024-11-06 13:26:30.947646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.947763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.130 [2024-11-06 13:26:30.947845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.130 [2024-11-06 13:26:30.947870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.130 [2024-11-06 13:26:30.947891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb93c000b90 00:29:49.130 [2024-11-06 13:26:30.947949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.130 qpair failed and we were unable to recover it. 00:29:49.130 [2024-11-06 13:26:30.957674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.957757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.130 [2024-11-06 13:26:30.957786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.130 [2024-11-06 13:26:30.957800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.130 [2024-11-06 13:26:30.957813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb93c000b90 00:29:49.130 [2024-11-06 13:26:30.957843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.130 qpair failed and we were unable to recover it. 00:29:49.130 [2024-11-06 13:26:30.967750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.130 [2024-11-06 13:26:30.967856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.131 [2024-11-06 13:26:30.967920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.131 [2024-11-06 13:26:30.967945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.131 [2024-11-06 13:26:30.967966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb934000b90 00:29:49.131 [2024-11-06 13:26:30.968021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.131 qpair failed and we were unable to recover it. 
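The run above repeats a single failure signature while the target side is deliberately torn down: each fabric CONNECT is rejected by the target ("Unknown controller ID 0x1"), completes with sct 1 / sc 130 (0x82, a command-specific Fabrics status), and the initiator abandons the qpair with transport error -6 (ENXIO). A minimal triage sketch for condensing that pattern out of a captured console stream follows; the target_disconnect.log filename is an assumption for illustration, not a file this job writes.

#!/usr/bin/env bash
# Log-triage sketch (assumption: the console output above was saved to
# target_disconnect.log; point this at wherever the stream is actually tee'd).
log=target_disconnect.log
# How many CONNECT commands completed with an error status?
grep -c 'Connect command completed with error' "$log"
# Tally the distinct status pairs, e.g. "sct 1, sc 130" (sc 130 == 0x82).
grep -o 'sct [0-9]*, sc [0-9]*' "$log" | sort | uniq -c | sort -rn
# Which qpair ids hit the -6 (ENXIO) transport error?
grep 'CQ transport error -6' "$log" | grep -o 'qpair id [0-9]*' | sort | uniq -c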
00:29:49.131 [2024-11-06 13:26:30.977750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.131 [2024-11-06 13:26:30.977829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.131 [2024-11-06 13:26:30.977862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.131 [2024-11-06 13:26:30.977879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.131 [2024-11-06 13:26:30.977895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb934000b90 00:29:49.131 [2024-11-06 13:26:30.977929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.131 qpair failed and we were unable to recover it. 00:29:49.131 [2024-11-06 13:26:30.978097] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:49.131 A controller has encountered a failure and is being reset. 00:29:49.131 [2024-11-06 13:26:30.978219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d5f30 (9): Bad file descriptor 00:29:49.131 Controller properly reset. 00:29:49.131 Initializing NVMe Controllers 00:29:49.131 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:49.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:49.131 Initialization complete. Launching workers. 
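Here the sequence flips from failure to recovery: a keep-alive submission fails, SPDK flags the controller ("A controller has encountered a failure and is being reset."), the stale tqpair is flushed against a bad file descriptor, and the host then resets and reattaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, associating one I/O qpair per lcore. A small sketch of gating a follow-on step on those exact markers is below; it assumes the console output is mirrored to run.log, which this job does not do by itself.

# Recovery-gate sketch (assumption: console output is also tee'd to run.log).
log=run.log
for marker in 'Controller properly reset.' 'Initialization complete.'; do
  for _ in $(seq 30); do                  # poll up to ~30 s per marker
    grep -qF "$marker" "$log" && continue 2
    sleep 1
  done
  echo "timed out waiting for: $marker" >&2
  exit 1
done
echo 'controller recovered'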
00:29:49.131 Starting thread on core 1 00:29:49.131 Starting thread on core 2 00:29:49.131 Starting thread on core 3 00:29:49.131 Starting thread on core 0 00:29:49.131 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:49.131 00:29:49.131 real 0m11.393s 00:29:49.131 user 0m21.841s 00:29:49.131 sys 0m4.011s 00:29:49.131 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:49.131 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.131 ************************************ 00:29:49.131 END TEST nvmf_target_disconnect_tc2 00:29:49.131 ************************************ 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.391 rmmod nvme_tcp 00:29:49.391 rmmod nvme_fabrics 00:29:49.391 rmmod nvme_keyring 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1918154 ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1918154 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1918154 ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1918154 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1918154 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1918154' 00:29:49.391 killing process with pid 1918154 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 1918154 00:29:49.391 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1918154 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.651 13:26:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.566 13:26:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.566 00:29:51.566 real 0m21.962s 00:29:51.566 user 0m49.487s 00:29:51.566 sys 0m10.341s 00:29:51.566 13:26:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:51.566 13:26:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:51.566 ************************************ 00:29:51.566 END TEST nvmf_target_disconnect 00:29:51.566 ************************************ 00:29:51.827 13:26:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:51.827 00:29:51.827 real 6m35.611s 00:29:51.827 user 11m22.147s 00:29:51.827 sys 2m17.254s 00:29:51.827 13:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:51.827 13:26:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.827 ************************************ 00:29:51.827 END TEST nvmf_host 00:29:51.827 ************************************ 00:29:51.827 13:26:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:51.827 13:26:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:51.827 13:26:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:51.827 13:26:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:51.827 13:26:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:51.827 13:26:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:51.827 ************************************ 00:29:51.827 START TEST nvmf_target_core_interrupt_mode 00:29:51.827 ************************************ 00:29:51.827 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:51.827 * Looking for test storage... 00:29:51.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:51.827 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:51.827 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:51.827 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.090 --rc genhtml_branch_coverage=1 00:29:52.090 --rc genhtml_function_coverage=1 00:29:52.090 --rc genhtml_legend=1 00:29:52.090 --rc geninfo_all_blocks=1 00:29:52.090 --rc geninfo_unexecuted_blocks=1 00:29:52.090 00:29:52.090 ' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.090 --rc genhtml_branch_coverage=1 00:29:52.090 --rc genhtml_function_coverage=1 00:29:52.090 --rc genhtml_legend=1 00:29:52.090 --rc geninfo_all_blocks=1 00:29:52.090 --rc geninfo_unexecuted_blocks=1 00:29:52.090 00:29:52.090 ' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.090 --rc genhtml_branch_coverage=1 00:29:52.090 --rc genhtml_function_coverage=1 00:29:52.090 --rc genhtml_legend=1 00:29:52.090 --rc geninfo_all_blocks=1 00:29:52.090 --rc geninfo_unexecuted_blocks=1 00:29:52.090 00:29:52.090 ' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.090 --rc genhtml_branch_coverage=1 00:29:52.090 --rc genhtml_function_coverage=1 00:29:52.090 --rc genhtml_legend=1 00:29:52.090 --rc geninfo_all_blocks=1 00:29:52.090 --rc geninfo_unexecuted_blocks=1 00:29:52.090 00:29:52.090 ' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.090 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:52.091 ************************************ 00:29:52.091 START TEST nvmf_abort 00:29:52.091 ************************************ 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:52.091 * Looking for test storage... 00:29:52.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.091 13:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.353 --rc genhtml_branch_coverage=1 00:29:52.353 --rc genhtml_function_coverage=1 00:29:52.353 --rc genhtml_legend=1 00:29:52.353 --rc geninfo_all_blocks=1 00:29:52.353 --rc geninfo_unexecuted_blocks=1 00:29:52.353 00:29:52.353 ' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.353 --rc genhtml_branch_coverage=1 00:29:52.353 --rc genhtml_function_coverage=1 00:29:52.353 --rc genhtml_legend=1 00:29:52.353 --rc geninfo_all_blocks=1 00:29:52.353 --rc geninfo_unexecuted_blocks=1 00:29:52.353 00:29:52.353 ' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.353 --rc genhtml_branch_coverage=1 00:29:52.353 --rc genhtml_function_coverage=1 00:29:52.353 --rc genhtml_legend=1 00:29:52.353 --rc geninfo_all_blocks=1 00:29:52.353 --rc geninfo_unexecuted_blocks=1 00:29:52.353 00:29:52.353 ' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.353 --rc genhtml_branch_coverage=1 00:29:52.353 --rc genhtml_function_coverage=1 00:29:52.353 --rc genhtml_legend=1 00:29:52.353 --rc geninfo_all_blocks=1 00:29:52.353 --rc geninfo_unexecuted_blocks=1 00:29:52.353 00:29:52.353 ' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.353 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.354 13:26:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.354 13:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.499 13:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:00.499 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
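gather_supported_nvmf_pci_devs builds whitelists of NIC device ids (e810, x722, mlx) and then walks the PCI bus, so the "Found 0000:31:00.0 (0x8086 - 0x159b)" hit above is an E810 function that passed the id match before the bound-driver check (ice). A rough standalone equivalent of that match, assuming lspci from pciutils is available on the node:

# List E810-class functions by vendor:device id (0x8086:0x159b, as matched above).
lspci -d 8086:159b -nn
# For each match, report the bound kernel driver ("ice" in this run) via sysfs.
for pci in $(lspci -d 8086:159b | awk '{print $1}'); do
  basename "$(readlink "/sys/bus/pci/devices/0000:$pci/driver")"
done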
00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:00.499 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.499 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:00.500 Found net devices under 0000:31:00.0: cvl_0_0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:00.500 Found net devices under 0000:31:00.1: cvl_0_1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.728 ms 00:30:00.500 00:30:00.500 --- 10.0.0.2 ping statistics --- 00:30:00.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.500 rtt min/avg/max/mdev = 0.728/0.728/0.728/0.000 ms 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:30:00.500 00:30:00.500 --- 10.0.0.1 ping statistics --- 00:30:00.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.500 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1923776 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1923776 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1923776 ']' 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.500 [2024-11-06 13:26:41.762629] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:00.500 [2024-11-06 13:26:41.763752] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:30:00.500 [2024-11-06 13:26:41.763800] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.500 [2024-11-06 13:26:41.838947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:00.500 [2024-11-06 13:26:41.885232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.500 [2024-11-06 13:26:41.885277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.500 [2024-11-06 13:26:41.885284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.500 [2024-11-06 13:26:41.885289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.500 [2024-11-06 13:26:41.885294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.500 [2024-11-06 13:26:41.886967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.500 [2024-11-06 13:26:41.887128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.500 [2024-11-06 13:26:41.887130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.500 [2024-11-06 13:26:41.958204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.500 [2024-11-06 13:26:41.959086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:00.500 [2024-11-06 13:26:41.960343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
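For reference, the nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) reduces to a simple two-namespace topology: one port of the E810 pair, cvl_0_0, is moved into a private network namespace to host the target at 10.0.0.2, while its sibling cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1, so traffic crosses the physical link (the two ports are evidently cabled together on this rig) rather than loopback. A minimal sketch using this log's interface names, which will differ on other machines:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it by tag:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                               # confirm the path before starting the target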
00:30:00.500 [2024-11-06 13:26:41.960390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.500 13:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.500 [2024-11-06 13:26:42.047988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:00.500 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 Malloc0 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 Delay0 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
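The target whose reactors just came up was launched inside that namespace (nvmf/common.sh@508 above) as:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE

Reading the flags: -m 0xE (binary 1110) pins reactors to cores 1-3, which matches the three "Reactor started" notices and "Total cores available: 3"; -e 0xFFFF enables every tracepoint group, hence the spdk_trace hints; -i 0 sets the shared-memory id later consumed by the process_shm trap; and --interrupt-mode is what produces the spdk_interrupt_mode_enable and per-thread "to intr mode" notices, switching reactors from busy-polling to sleeping on file descriptors between events.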
00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 [2024-11-06 13:26:42.147972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.501 13:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:00.501 [2024-11-06 13:26:42.294508] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:03.095 Initializing NVMe Controllers 00:30:03.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:03.095 controller IO queue size 128 less than required 00:30:03.095 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:03.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:03.095 Initialization complete. Launching workers. 
00:30:03.095 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28086 00:30:03.095 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28147, failed to submit 66 00:30:03.095 success 28086, unsuccessful 61, failed 0 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.095 rmmod nvme_tcp 00:30:03.095 rmmod nvme_fabrics 00:30:03.095 rmmod nvme_keyring 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1923776 ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1923776 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1923776 ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1923776 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1923776 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1923776' 00:30:03.095 killing process with pid 1923776 
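The abort run above can be read as follows: build/examples/abort connects to the cnode0 listener, submits I/O at queue depth 128, and issues NVMe Abort commands against the outstanding requests; the Delay0 bdev created earlier (bdev_delay_create with -r/-t/-w/-n 1000000, i.e. a one-second artificial latency on every operation) exists precisely to keep I/O in flight long enough to be abortable. On that reading, "failed: 28086" on the I/O side and "success 28086" on the abort side are the same events seen from both ends, and the "controller IO queue size 128 less than required" warning is expected at this queue depth. The standalone invocation, exactly as traced at target/abort.sh@30:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
  # -c 0x1: run on core 0 (the "with lcore 0" line); -t 1: run time in
  # seconds; -l warning: log level; -q 128: queue depth.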
00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1923776 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1923776 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.095 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.096 13:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.067 00:30:05.067 real 0m12.954s 00:30:05.067 user 0m10.824s 00:30:05.067 sys 0m7.041s 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:05.067 ************************************ 00:30:05.067 END TEST nvmf_abort 00:30:05.067 ************************************ 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.067 ************************************ 00:30:05.067 START TEST nvmf_ns_hotplug_stress 00:30:05.067 ************************************ 00:30:05.067 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:05.329 * Looking for test storage... 
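Closing out nvmf_abort above, nvmftestfini mirrored the setup: the subsystem was deleted over RPC, the target killed and reaped, the kernel initiator modules unloaded, and the tagged firewall rule removed by filtering the saved ruleset rather than by rule number. In outline (the namespace deletion itself is hidden behind the silenced _remove_spdk_ns call, so that step is an assumption here):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # abort.sh@34
  kill 1923776 && wait 1923776                # killprocess
  modprobe -v -r nvme-tcp                     # also drops nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip only tagged rules
  ip netns del cvl_0_0_ns_spdk                # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1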
00:30:05.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.329 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.329 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.329 13:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:05.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.329 --rc genhtml_branch_coverage=1 00:30:05.329 --rc genhtml_function_coverage=1 00:30:05.329 --rc genhtml_legend=1 00:30:05.329 --rc geninfo_all_blocks=1 00:30:05.329 --rc geninfo_unexecuted_blocks=1 00:30:05.329 00:30:05.329 ' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:05.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.329 --rc genhtml_branch_coverage=1 00:30:05.329 --rc genhtml_function_coverage=1 00:30:05.329 --rc genhtml_legend=1 00:30:05.329 --rc geninfo_all_blocks=1 00:30:05.329 --rc geninfo_unexecuted_blocks=1 00:30:05.329 00:30:05.329 ' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:05.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.329 --rc genhtml_branch_coverage=1 00:30:05.329 --rc genhtml_function_coverage=1 00:30:05.329 --rc genhtml_legend=1 00:30:05.329 --rc geninfo_all_blocks=1 00:30:05.329 --rc geninfo_unexecuted_blocks=1 00:30:05.329 00:30:05.329 ' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:05.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.329 --rc genhtml_branch_coverage=1 00:30:05.329 --rc genhtml_function_coverage=1 
00:30:05.329 --rc genhtml_legend=1 00:30:05.329 --rc geninfo_all_blocks=1 00:30:05.329 --rc geninfo_unexecuted_blocks=1 00:30:05.329 00:30:05.329 ' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
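The cmp_versions walk traced just above is common.sh deciding which lcov option spelling to use: `lcov --version | awk '{print $NF}'` yields 1.15 on this machine, the componentwise compare confirms 1.15 < 2, and so the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags are exported in LCOV_OPTS/LCOV. The same gate can be written more compactly with sort -V; a sketch (version_lt is a name introduced here for illustration, not a helper from the tree):

  version_lt() {                       # true iff $1 < $2 in version order
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi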
00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.329 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.330 13:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.470 13:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:13.470 13:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:13.470 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:13.470 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.470 
13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:13.470 Found net devices under 0000:31:00.0: cvl_0_0 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.470 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:13.471 Found net devices under 0000:31:00.1: cvl_0_1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.471 13:26:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:13.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:30:13.471 00:30:13.471 --- 10.0.0.2 ping statistics --- 00:30:13.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.471 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:30:13.471 00:30:13.471 --- 10.0.0.1 ping statistics --- 00:30:13.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.471 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1928491 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1928491 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1928491 ']' 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
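waitforlisten (common.sh@837-840 above) blocks until the freshly forked target answers on /var/tmp/spdk.sock, with max_retries=100 as traced. In spirit, though not the helper's literal source, it amounts to a probe loop like this sketch:

  waitforlisten_sketch() {             # illustrative stand-in, not the real helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      kill -0 "$pid" || return 1       # give up if the target died during startup
      sleep 0.5
    done
    return 1
  }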
00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.471 13:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:13.471 [2024-11-06 13:26:54.821071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:13.471 [2024-11-06 13:26:54.822218] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:30:13.472 [2024-11-06 13:26:54.822268] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.472 [2024-11-06 13:26:54.922969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.472 [2024-11-06 13:26:54.974395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.472 [2024-11-06 13:26:54.974444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.472 [2024-11-06 13:26:54.974452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.472 [2024-11-06 13:26:54.974459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.472 [2024-11-06 13:26:54.974465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.472 [2024-11-06 13:26:54.976551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.472 [2024-11-06 13:26:54.976709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.472 [2024-11-06 13:26:54.976711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.472 [2024-11-06 13:26:55.053434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:13.472 [2024-11-06 13:26:55.054421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:13.472 [2024-11-06 13:26:55.054975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:13.472 [2024-11-06 13:26:55.055126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
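The target itself is then launched inside that namespace with interrupt mode enabled (--interrupt-mode, reactors on cores 1-3 per the 0xE mask), and waitforlisten blocks until the RPC socket answers. The real helper lives in common/autotest_common.sh; the sketch below reduces the launch-and-wait pattern to its essentials, and the liveness probe via spdk_get_version is an illustrative choice, not the helper's exact check:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!

  rpc_addr=/var/tmp/spdk.sock
  for ((retry = 0; retry < 100; retry++)); do   # max_retries=100, as in the log
      # Any cheap RPC succeeds once the app listens on the UNIX socket.
      if ./scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
          break
      fi
      sleep 0.5
  done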
00:30:13.733 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:13.733 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:13.733 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:13.995 [2024-11-06 13:26:55.841579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.995 13:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:14.256 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.517 [2024-11-06 13:26:56.218226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.517 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.517 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:14.779 Malloc0 00:30:14.779 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:15.040 Delay0 00:30:15.040 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.301 13:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:15.301 NULL1 00:30:15.301 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
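Pulled out of the xtrace noise, the entire target configuration in this stretch is nine rpc.py calls: a TCP transport, one subsystem with data and discovery listeners on 10.0.0.2:4420, and two namespaces, a delay-wrapped malloc bdev plus a resizable null bdev. Collected in order (paths shortened; the log invokes the same scripts/rpc.py by absolute path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0        # 32 MB ram bdev, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s avg/p99 latencies (us)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512             # 1000 MB null bdev; resized later
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The very slow Delay0 device is what makes the coming hotplug churn interesting: with second-long latencies, in-flight I/O is all but guaranteed to straddle the remove/add events.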
00:30:15.563 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1928958 00:30:15.563 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:15.563 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:15.563 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.824 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.085 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:16.086 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:16.086 true 00:30:16.086 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:16.086 13:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.346 13:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.607 13:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:16.607 13:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:16.868 true 00:30:16.868 13:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:16.868 13:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.808 Read completed with error (sct=0, sc=11) 00:30:17.808 13:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:18.068 13:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:18.068 13:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:18.328 true 00:30:18.328 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:18.328 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.587 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.587 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:18.587 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:18.846 true 00:30:18.846 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:18.846 13:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.228 13:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.229 13:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:20.229 13:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:20.488 true 00:30:20.488 13:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:20.488 13:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.428 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.428 13:27:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:21.428 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:21.689 true 00:30:21.689 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:21.689 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.689 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.949 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:21.949 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:22.209 true 00:30:22.209 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:22.209 13:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.469 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.469 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:22.469 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:22.730 true 00:30:22.730 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:22.730 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.990 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.990 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:22.990 13:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:23.250 true 00:30:23.250 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:23.250 13:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.510 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.510 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:23.510 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:23.770 true 00:30:23.770 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:23.770 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.031 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.031 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:24.031 13:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:24.291 true 00:30:24.291 13:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:24.291 13:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 13:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.675 13:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:25.675 13:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:25.936 true 00:30:25.936 13:27:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:25.936 13:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.876 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.876 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:26.876 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:27.136 true 00:30:27.136 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:27.136 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.136 13:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.395 13:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:27.395 13:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:27.656 true 00:30:27.656 13:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:27.656 13:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 13:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:29.043 13:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:29.043 13:27:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:29.043 true 00:30:29.043 13:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:29.043 13:27:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.985 13:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.245 13:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:30.245 13:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:30.245 true 00:30:30.245 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:30.245 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.506 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.766 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:30.766 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:30.766 true 00:30:30.766 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:30.766 13:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 13:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:32.149 13:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:32.149 13:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:32.408 true 00:30:32.408 13:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:32.408 13:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:33.348 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.609 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:33.609 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:33.609 true 00:30:33.609 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:33.609 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.870 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.128 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:34.128 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:34.128 true 00:30:34.128 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:34.128 13:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 13:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:35.506 13:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:35.506 13:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:35.765 true 00:30:35.765 13:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:35.765 13:27:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:36.703 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.703 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:36.703 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:36.962 true 00:30:36.962 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:36.962 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.222 13:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.222 13:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:37.222 13:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:37.481 true 00:30:37.481 13:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:37.482 13:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 13:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:38.860 13:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:38.860 13:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:39.119 true 00:30:39.119 13:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:39.119 13:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.059 13:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.059 13:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:40.059 13:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:40.320 true 00:30:40.320 13:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:40.320 13:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.320 13:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.579 13:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:40.579 13:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:40.840 true 00:30:40.840 13:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:40.840 13:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.779 13:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
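From null_size=1001 onward the log is one pattern repeated every second or so: while spdk_nvme_perf (PID 1928958) drives 512-byte random reads at queue depth 128 for 30 seconds, the script yanks namespace 1, re-adds Delay0, and grows NULL1 by one step per pass. The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records are the expected initiator-side view of reads landing on a just-removed namespace, with perf reporting only every thousandth error per its -Q 1000 argument. A sketch of the loop, reconstructed from the @40-@50 xtrace lines above (rpc.py path shortened as before):

  spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do          # iterate until the 30 s perf job exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"   # grow the live namespace too
  done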
00:30:41.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:42.040 13:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:42.040 13:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:42.300 true 00:30:42.300 13:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:42.300 13:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:43.246 13:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.246 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:43.246 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:43.506 true 00:30:43.506 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:43.506 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.506 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.765 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:43.765 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:44.025 true 00:30:44.025 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:44.025 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.025 13:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.284 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:44.284 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:44.544 true 00:30:44.544 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:44.544 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.805 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.805 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:44.805 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:45.066 true 00:30:45.066 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958 00:30:45.066 13:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.448 Initializing NVMe Controllers 00:30:46.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.448 Controller IO queue size 128, less than required. 00:30:46.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.448 Controller IO queue size 128, less than required. 00:30:46.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:46.448 Initialization complete. Launching workers. 
00:30:46.448 ========================================================
00:30:46.448                                                                               Latency(us)
00:30:46.448 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:30:46.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2132.16       1.04   36352.36    1448.81 1030651.78
00:30:46.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   17852.48       8.72    7145.95    1142.40  401524.47
00:30:46.448 ========================================================
00:30:46.448 Total                                                                  :   19984.64       9.76   10261.98    1142.40 1030651.78
00:30:46.448
00:30:46.448 13:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:46.448 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:30:46.448 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:30:46.448 true
00:30:46.448 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928958
00:30:46.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1928958) - No such process
00:30:46.448 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1928958
00:30:46.448 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:46.707 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:46.968 null0
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:46.968 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:47.228 null1
00:30:47.228 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:47.228
13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.228 13:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:47.488 null2 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:47.488 null3 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.488 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:47.748 null4 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:47.748 null5 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.748 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:48.008 null6 00:30:48.008 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:48.008 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:48.008 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:48.268 null7 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
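The interleaved @14-@18 lines above are eight concurrent copies of the script's add_remove helper, one per null bdev, each churning its own namespace ID ten times. Reconstructed from those xtrace lines (rpc.py path shortened):

  # Bind namespace ID $1 to bdev $2, then attach/detach it ten times.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }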
00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.268 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1935853 1935854 1935856 1935858 1935860 1935862 1935864 1935866 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.269 13:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.269 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.528 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.529 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.790 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.050 13:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.050 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.051 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.312 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.312 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.312 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.312 13:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.312 13:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.312 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.313 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.574 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.836 13:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.836 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.098 13:27:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.360 13:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.360 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.622 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.884 13:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.884 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:51.145 13:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:51.145 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.145 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.145 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:51.145 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.406 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:51.666 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.927 13:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.927 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.237 13:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.237 rmmod nvme_tcp 00:30:52.237 rmmod nvme_fabrics 00:30:52.237 rmmod nvme_keyring 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1928491 ']' 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1928491 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1928491 ']' 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1928491 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1928491 
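The namespace churn traced above (ns_hotplug_stress.sh lines @16-@18) is a tight add/remove loop against cnode1. A minimal sketch of the shape of that loop, reconstructed from the xtrace: which null bdev gets attached and which NSID gets pulled is randomized in the real script (the exact randomization shown here is an assumption), and since adds and removes visibly interleave in the trace, each RPC has to tolerate failure:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  i=0
  while (( i < 10 )); do
      n=$(( (RANDOM % 8) + 1 ))      # NSID 1..8; bdevs null0..null7 back the namespaces
      $rpc_py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))" || true
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$(( (RANDOM % 8) + 1 ))" || true
      (( ++i ))
  done

Ten iterations of this against a live target produce the unordered add/remove pairs in the log; the teardown that follows (trap reset, nvmftestfini, rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, killprocess on the target pid) is the standard exit path for every test in this run.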
00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1928491' 00:30:52.237 killing process with pid 1928491 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1928491 00:30:52.237 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1928491 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.534 13:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.475 00:30:54.475 real 0m49.415s 00:30:54.475 user 2m57.775s 00:30:54.475 sys 0m20.520s 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:54.475 ************************************ 00:30:54.475 END TEST nvmf_ns_hotplug_stress 00:30:54.475 ************************************ 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:30:54.475 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.736 ************************************ 00:30:54.736 START TEST nvmf_delete_subsystem 00:30:54.736 ************************************ 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:54.736 * Looking for test storage... 00:30:54.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.736 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.737 --rc genhtml_branch_coverage=1 00:30:54.737 --rc genhtml_function_coverage=1 00:30:54.737 --rc genhtml_legend=1 00:30:54.737 --rc geninfo_all_blocks=1 00:30:54.737 --rc geninfo_unexecuted_blocks=1 00:30:54.737 00:30:54.737 ' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.737 --rc genhtml_branch_coverage=1 00:30:54.737 --rc genhtml_function_coverage=1 00:30:54.737 --rc genhtml_legend=1 00:30:54.737 --rc geninfo_all_blocks=1 00:30:54.737 --rc geninfo_unexecuted_blocks=1 00:30:54.737 00:30:54.737 ' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.737 --rc genhtml_branch_coverage=1 00:30:54.737 --rc genhtml_function_coverage=1 00:30:54.737 --rc genhtml_legend=1 00:30:54.737 --rc geninfo_all_blocks=1 00:30:54.737 --rc geninfo_unexecuted_blocks=1 00:30:54.737 00:30:54.737 ' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.737 --rc genhtml_branch_coverage=1 00:30:54.737 --rc genhtml_function_coverage=1 00:30:54.737 --rc 
genhtml_legend=1 00:30:54.737 --rc geninfo_all_blocks=1 00:30:54.737 --rc geninfo_unexecuted_blocks=1 00:30:54.737 00:30:54.737 ' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.737 13:27:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.737 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.738 13:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:02.879 13:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:02.879 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:02.879 13:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:02.880 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:02.880 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.880 13:27:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:02.880 Found net devices under 0000:31:00.0: cvl_0_0 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:02.880 Found net devices under 0000:31:00.1: cvl_0_1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.880 13:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:02.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:31:02.880 00:31:02.880 --- 10.0.0.2 ping statistics --- 00:31:02.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.880 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:31:02.880 00:31:02.880 --- 10.0.0.1 ping statistics --- 00:31:02.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.880 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1941048 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1941048 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1941048 ']' 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
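Stripped of the xtrace noise, the network bring-up that just completed is a two-port split: one e810 port (cvl_0_0) moves into a private namespace for the target, while the other (cvl_0_1) stays in the root namespace for the initiator. A condensed replay of the commands shown above, ending with the target launch that follows:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port toward the initiator, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace: 2 cores (0x3), all tracepoints, interrupt mode
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

The real ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment, which is why the iptr cleanup earlier in the log could strip test rules with a simple iptables-save | grep -v SPDK_NVMF | iptables-restore round trip.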
00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:02.880 13:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.881 [2024-11-06 13:27:44.251124] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:02.881 [2024-11-06 13:27:44.252267] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:31:02.881 [2024-11-06 13:27:44.252315] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.881 [2024-11-06 13:27:44.354096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:02.881 [2024-11-06 13:27:44.404896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.881 [2024-11-06 13:27:44.404948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.881 [2024-11-06 13:27:44.404957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.881 [2024-11-06 13:27:44.404964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:02.881 [2024-11-06 13:27:44.404972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.881 [2024-11-06 13:27:44.406578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.881 [2024-11-06 13:27:44.406582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.881 [2024-11-06 13:27:44.483160] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:02.881 [2024-11-06 13:27:44.483734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:02.881 [2024-11-06 13:27:44.484078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
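Between launching nvmf_tgt and issuing the first rpc_cmd, waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of the idea only — the real autotest_common.sh helper differs in detail, and the retry budget and probe RPC here are assumptions:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do            # ~10 s budget (assumed)
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          # any cheap RPC works as a liveness probe; rpc_get_methods is always registered
          "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                     # never came up
  }

Only once this returns does the trace print 'return 0' and move on to creating the transport; the notices in between (reactors started on cores 0 and 1, spdk_threads set to intr mode) are the --interrupt-mode flag taking effect.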
00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 [2024-11-06 13:27:45.135707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 [2024-11-06 13:27:45.168214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 NULL1 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 Delay0 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1941170 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:03.452 13:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:03.452 [2024-11-06 13:27:45.274945] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
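Condensing the rpc_cmd calls above (rpc_cmd in the trace; shown here as plain rpc.py invocations against the target's socket), the whole delete_subsystem setup is: a fast null bdev wrapped in a delay bdev that adds roughly one second of latency to every operation (bdev_delay_create takes microseconds), a deep-queue perf load against it, and then a subsystem delete while those I/Os are still parked in the delay layer:

  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_null_create NULL1 1000 512          # 1000 MB, 512-byte blocks
  $rpc_py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # avg/p99 read+write latency, in usec
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &    # queue depth 128, 70% reads, 512 B I/O
  sleep 2
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O

The storm of 'completed with error (sct=0, sc=8)' lines that follows is the point of the test, not a failure: status code type 0, status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", i.e. the queued perf I/Os being aborted cleanly as the subsystem's queues are torn down.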
00:31:05.365 13:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.365 13:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.365 13:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 starting I/O failed: -6 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 Write completed with error (sct=0, sc=8) 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 starting I/O failed: -6 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.626 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read 
completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Write completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting I/O failed: -6 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 Read completed with error (sct=0, sc=8) 00:31:05.627 starting 
I/O failed: -6
[... a long run of near-identical "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries at 00:31:05.627-00:31:05.628 omitted; every completion carried the same status ...]
00:31:05.628 [2024-11-06 13:27:47.354848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x938f00 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:05.628 [2024-11-06 13:27:47.358915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47e0000c40 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:06.682 [2024-11-06 13:27:48.330445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93a5e0 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:06.682 [2024-11-06 13:27:48.359537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9390e0 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:06.682 [2024-11-06 13:27:48.359682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9394a0 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:06.682 [2024-11-06 13:27:48.360405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47e000d7e0 is same with the state(6) to be set
[... further repeated Read/Write completed with error (sct=0, sc=8) entries omitted ...]
00:31:06.682 [2024-11-06 13:27:48.360925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f47e000d020 is same with the state(6) to be set
00:31:06.682 Initializing NVMe Controllers
00:31:06.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:06.682 Controller IO queue size 128, less than required.
00:31:06.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:06.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:06.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:06.682 Initialization complete. Launching workers.
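Every failed completion in the storm above carries the same status pair. sct=0 is the NVMe generic status code type, and generic status 8 (0x08) is Command Aborted due to SQ Deletion in the NVMe base specification, which is exactly what in-flight I/O should report when the subsystem under test is deleted mid-run; the submit-side "starting I/O failed: -6" is -ENXIO from a queue pair that is already gone. A minimal decoder for such log lines, written as a hypothetical shell helper (not part of the test suite), assuming standard spec status values:

    decode_nvme_status() {
        # sct = status code type, sc = status code, both as printed in the log
        local sct=$1 sc=$2
        case "$sct" in
            0)  # generic command status
                case "$sc" in
                    0) echo "successful completion" ;;
                    7) echo "command abort requested" ;;
                    8) echo "command aborted due to SQ deletion" ;;  # the storm above
                    *) echo "generic status, sc=$sc" ;;
                esac ;;
            1) echo "command specific status, sc=$sc" ;;
            2) echo "media/data integrity error, sc=$sc" ;;
            7) echo "vendor specific, sc=$sc" ;;
            *) echo "unknown sct=$sct, sc=$sc" ;;
        esac
    }
    decode_nvme_status 0 8   # prints: command aborted due to SQ deletion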
00:31:06.682 ========================================================
00:31:06.682                                                              Latency(us)
00:31:06.682 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:31:06.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  191.11    0.09  891697.19     486.57 1007931.79
00:31:06.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  152.79    0.07  936996.88     307.60 1011528.42
00:31:06.682 ========================================================
00:31:06.682 Total                                                                    :  343.90    0.17  911823.10     307.60 1011528.42
00:31:06.682
00:31:06.682 [2024-11-06 13:27:48.361452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93a5e0 (9): Bad file descriptor
00:31:06.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:06.682 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:06.682 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:06.682 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1941170
00:31:06.682 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1941170
00:31:07.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1941170) - No such process
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1941170
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1941170
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1941170
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:07.254 [2024-11-06 13:27:48.895958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1941865
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1941865
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:31:07.254 13:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:07.254 [2024-11-06 13:27:48.995662] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
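With the old subsystem gone, the trace above rebuilds it over JSON-RPC and points a second perf run at it. rpc_cmd is this suite's wrapper around SPDK's scripts/rpc.py; the same sequence as plain commands would look roughly like the sketch below (the default /var/tmp/spdk.sock RPC socket and the already-created Delay0 bdev are assumptions carried over from this run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1

    # Recreate the subsystem, listener and namespace exactly as lines @48-@50 do
    $SPDK/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Delay0

    # Drive queued I/O at it for 3 seconds, as the @52 line does:
    #   -c 0xC  cores 2 and 3      -q 128  queue depth per worker
    #   -w randrw -M 70            70% reads / 30% writes, 512-byte I/O (-o 512)
    $SPDK/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!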
00:31:07.825 13:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:07.825 13:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1941865
00:31:07.825 13:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[... the same (( delay++ > 20 )) / kill -0 1941865 / sleep 0.5 round repeats at 00:31:08.085, 00:31:08.654, 00:31:09.224, 00:31:09.793 and 00:31:10.052 ...]
00:31:10.312 Initializing NVMe Controllers
00:31:10.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:10.312 Controller IO queue size 128, less than required.
00:31:10.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:10.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:10.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:10.312 Initialization complete. Launching workers.
00:31:10.312 ========================================================
00:31:10.312                                                              Latency(us)
00:31:10.312 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:31:10.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1002614.60 1000266.73 1041600.02
00:31:10.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1003995.61 1000373.79 1010604.13
00:31:10.312 ========================================================
00:31:10.312 Total                                                                    :  256.00    0.12 1003305.11 1000266.73 1041600.02
00:31:10.312
00:31:10.312 [2024-11-06 13:27:52.199933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb070 is same with the state(6) to be set
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1941865
00:31:10.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1941865) - No such process
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1941865
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:10.572 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:10.833 rmmod nvme_tcp
00:31:10.833 rmmod nvme_fabrics
00:31:10.833 rmmod nvme_keyring
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1941048 ']'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1941048 ']'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1941048'
00:31:10.833 killing process with pid 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1941048
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:10.833 13:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:13.375
00:31:13.375 real 0m18.390s
00:31:13.375 user 0m26.383s
00:31:13.375 sys 0m7.572s
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:13.375 ************************************
00:31:13.375 END TEST nvmf_delete_subsystem
00:31:13.375 ************************************
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
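The teardown that nvmftestfini performed above, condensed into standalone commands for reference. Everything is lifted from the trace except the netns deletion, which is an assumption about what _remove_spdk_ns does with the cvl_0_0_ns_spdk namespace created during setup:

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 1941048 && wait 1941048   # the nvmf_tgt pid this run recorded in nvmfpid;
                                   # wait only works from the shell that spawned it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's own rules
    ip netns delete cvl_0_0_ns_spdk   # assumption: the core of _remove_spdk_ns
    ip -4 addr flush cvl_0_1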
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:13.375 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:13.375 ************************************
00:31:13.376 START TEST nvmf_host_management
00:31:13.376 ************************************
00:31:13.376 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:31:13.376 * Looking for test storage...
00:31:13.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:13.376 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:13.376 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version
00:31:13.376 13:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.376 --rc genhtml_branch_coverage=1 00:31:13.376 --rc genhtml_function_coverage=1 00:31:13.376 --rc genhtml_legend=1 00:31:13.376 --rc geninfo_all_blocks=1 00:31:13.376 --rc geninfo_unexecuted_blocks=1 00:31:13.376 00:31:13.376 ' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.376 --rc genhtml_branch_coverage=1 00:31:13.376 --rc genhtml_function_coverage=1 00:31:13.376 --rc genhtml_legend=1 00:31:13.376 --rc geninfo_all_blocks=1 00:31:13.376 --rc geninfo_unexecuted_blocks=1 00:31:13.376 00:31:13.376 ' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.376 --rc genhtml_branch_coverage=1 00:31:13.376 --rc genhtml_function_coverage=1 00:31:13.376 --rc genhtml_legend=1 00:31:13.376 --rc geninfo_all_blocks=1 00:31:13.376 --rc geninfo_unexecuted_blocks=1 00:31:13.376 00:31:13.376 ' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:13.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.376 --rc genhtml_branch_coverage=1 00:31:13.376 --rc genhtml_function_coverage=1 00:31:13.376 --rc genhtml_legend=1 
00:31:13.376 --rc geninfo_all_blocks=1 00:31:13.376 --rc geninfo_unexecuted_blocks=1 00:31:13.376 00:31:13.376 ' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... the same duplicated toolchain PATH as above, with /opt/go prepended once more ...]:/var/lib/snapd/snap/bin
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... the same duplicated toolchain PATH as above, with /opt/protoc prepended once more ...]:/var/lib/snapd/snap/bin
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:31:13.376 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the same duplicated toolchain PATH as above ...]:/var/lib/snapd/snap/bin
00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:13.377 13:27:55
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.377 13:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.521 13:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:21.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:21.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
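The discovery above matches PCI devices against the e810/x722/mlx vendor:device ID arrays and then, in the entries that follow, resolves each hit to its kernel net device (the cvl_0_0 and cvl_0_1 names). A simplified standalone sketch of that idea, reading sysfs directly instead of going through the suite's pci_bus_cache, assuming a stock Linux sysfs layout:

    intel=0x8086
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
        # 0x1592 / 0x159b are the E810 device IDs the e810 array lists above
        if [[ $vendor == "$intel" && $device =~ ^0x(1592|159b)$ ]]; then
            for net in "$dev"/net/*; do
                [[ -e $net ]] || continue
                echo "Found ${dev##*/} ($vendor - $device): net device ${net##*/}"
            done
        fi
    done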
00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:21.521 Found net devices under 0000:31:00.0: cvl_0_0 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:21.521 Found net devices under 0000:31:00.1: cvl_0_1 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.521 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:21.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:21.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms
00:31:21.522
00:31:21.522 --- 10.0.0.2 ping statistics ---
00:31:21.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.522 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:21.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:21.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:31:21.522
00:31:21.522 --- 10.0.0.1 ping statistics ---
00:31:21.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:21.522 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1946784
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1946784
00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:31:21.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.522 13:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.522 [2024-11-06 13:28:02.762144] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.522 [2024-11-06 13:28:02.763284] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:31:21.522 [2024-11-06 13:28:02.763333] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.522 [2024-11-06 13:28:02.864681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.522 [2024-11-06 13:28:02.919333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.522 [2024-11-06 13:28:02.919382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.522 [2024-11-06 13:28:02.919390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.522 [2024-11-06 13:28:02.919397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.522 [2024-11-06 13:28:02.919404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.522 [2024-11-06 13:28:02.921416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.522 [2024-11-06 13:28:02.921577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.522 [2024-11-06 13:28:02.921736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.522 [2024-11-06 13:28:02.921737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:21.522 [2024-11-06 13:28:02.998979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.522 [2024-11-06 13:28:02.999784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.522 [2024-11-06 13:28:03.000243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:21.522 [2024-11-06 13:28:03.000670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:21.522 [2024-11-06 13:28:03.000753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
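The nvmf_tcp_init sequence traced above builds a loopback NVMe/TCP topology on a single host: the target-side port is moved into a private network namespace so target and initiator get independent network stacks. A minimal sketch of the same steps, assuming the NIC port pair has already been bound to the kernel and renamed to cvl_0_0/cvl_0_1 as in this job:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # NVMF_FIRST_INITIATOR_IP, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                       # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> root namespace

This is also why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk above: the target must listen on 10.0.0.2 inside the namespace while bdevperf connects from the root namespace.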
00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.784 [2024-11-06 13:28:03.630593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:21.784 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.046 Malloc0 00:31:22.046 [2024-11-06 13:28:03.734916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1947148 00:31:22.046 13:28:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1947148 /var/tmp/bdevperf.sock 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1947148 ']' 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:22.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.046 { 00:31:22.046 "params": { 00:31:22.046 "name": "Nvme$subsystem", 00:31:22.046 "trtype": "$TEST_TRANSPORT", 00:31:22.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.046 "adrfam": "ipv4", 00:31:22.046 "trsvcid": "$NVMF_PORT", 00:31:22.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.046 "hdgst": ${hdgst:-false}, 00:31:22.046 "ddgst": ${ddgst:-false} 00:31:22.046 }, 00:31:22.046 "method": "bdev_nvme_attach_controller" 00:31:22.046 } 00:31:22.046 EOF 00:31:22.046 )") 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
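gen_nvmf_target_json, traced above, assembles bdevperf's controller configuration by expanding one here-document per subsystem into a bash array and validating the result with jq; the rendered JSON is printed just below. A paraphrased sketch of the pattern, with the environment variables ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) already substituted; note the real helper in nvmf/common.sh embeds this array in the full bdevperf JSON document rather than printing the bare object:

config=()
for subsystem in 0; do
    config+=("$(
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(IFS=,; printf '%s\n' "${config[*]}") | jq .   # join the entries and sanity-check the JSON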
00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:22.046 13:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.046 "params": { 00:31:22.046 "name": "Nvme0", 00:31:22.046 "trtype": "tcp", 00:31:22.046 "traddr": "10.0.0.2", 00:31:22.046 "adrfam": "ipv4", 00:31:22.046 "trsvcid": "4420", 00:31:22.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.046 "hdgst": false, 00:31:22.046 "ddgst": false 00:31:22.046 }, 00:31:22.046 "method": "bdev_nvme_attach_controller" 00:31:22.046 }' 00:31:22.046 [2024-11-06 13:28:03.844418] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:31:22.046 [2024-11-06 13:28:03.844493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947148 ] 00:31:22.046 [2024-11-06 13:28:03.940514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.308 [2024-11-06 13:28:03.993905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.308 Running I/O for 10 seconds... 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.882 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.883 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.883 [2024-11-06 13:28:04.738782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 
13:28:04.738960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.738986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.738996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739143] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.883 [2024-11-06 13:28:04.739440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.883 [2024-11-06 13:28:04.739449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.739987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.739998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.740006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.740017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.884 [2024-11-06 13:28:04.740025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.884 [2024-11-06 13:28:04.740034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378c60 is same with the state(6) to be set 00:31:22.884 [2024-11-06 13:28:04.741338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:22.884 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.884 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:22.884 task offset: 105600 on job bdev=Nvme0n1 fails 00:31:22.884 00:31:22.884 Latency(us) 00:31:22.884 [2024-11-06T12:28:04.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.884 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:22.884 Job: Nvme0n1 ended in about 0.58 seconds with error 00:31:22.884 Verification LBA range: start 0x0 length 0x400 00:31:22.884 Nvme0n1 : 0.58 1319.85 82.49 109.99 0.00 43732.17 1897.81 35607.89 00:31:22.884 [2024-11-06T12:28:04.786Z] =================================================================================================================== 00:31:22.884 [2024-11-06T12:28:04.786Z] Total : 1319.85 82.49 109.99 0.00 43732.17 1897.81 35607.89 00:31:22.884 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.884 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:22.884 [2024-11-06 13:28:04.743854] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:22.884 [2024-11-06 13:28:04.743918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1368280 (9): Bad file descriptor 00:31:22.884 [2024-11-06 13:28:04.745385] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:22.884 [2024-11-06 13:28:04.745502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:22.884 [2024-11-06 13:28:04.745533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.885 [2024-11-06 13:28:04.745553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:22.885 [2024-11-06 13:28:04.745564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:22.885 [2024-11-06 13:28:04.745574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.885 [2024-11-06 13:28:04.745582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1368280 00:31:22.885 [2024-11-06 13:28:04.745606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1368280 (9): Bad file descriptor 00:31:22.885 [2024-11-06 13:28:04.745622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:22.885 [2024-11-06 13:28:04.745632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:22.885 [2024-11-06 13:28:04.745642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:22.885 [2024-11-06 13:28:04.745653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
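The failure burst above is the scenario under test: host_management.sh@84 revokes the initiator's access with nvmf_subsystem_remove_host while bdevperf still has 64 I/Os in flight, so the target aborts every queued command (the ABORTED - SQ DELETION storm) and the host's automatic reconnect is refused at FABRIC CONNECT with sct 1, sc 132 ('does not allow host'). host_management.sh@85 then re-adds the host NQN. Issued by hand against the target's RPC socket, the pair would look like this, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

# Revoke access: tears down the live connection and aborts queued I/O.
scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access so the controller can attach again in the next stage.
scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0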
00:31:22.885 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.885 13:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1947148 00:31:24.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1947148) - No such process 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:24.271 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:24.271 { 00:31:24.271 "params": { 00:31:24.271 "name": "Nvme$subsystem", 00:31:24.271 "trtype": "$TEST_TRANSPORT", 00:31:24.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.271 "adrfam": "ipv4", 00:31:24.271 "trsvcid": "$NVMF_PORT", 00:31:24.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.271 "hdgst": ${hdgst:-false}, 00:31:24.271 "ddgst": ${ddgst:-false} 00:31:24.271 }, 00:31:24.272 "method": "bdev_nvme_attach_controller" 00:31:24.272 } 00:31:24.272 EOF 00:31:24.272 )") 00:31:24.272 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:24.272 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:24.272 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:24.272 13:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:24.272 "params": { 00:31:24.272 "name": "Nvme0", 00:31:24.272 "trtype": "tcp", 00:31:24.272 "traddr": "10.0.0.2", 00:31:24.272 "adrfam": "ipv4", 00:31:24.272 "trsvcid": "4420", 00:31:24.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.272 "hdgst": false, 00:31:24.272 "ddgst": false 00:31:24.272 }, 00:31:24.272 "method": "bdev_nvme_attach_controller" 00:31:24.272 }' 00:31:24.272 [2024-11-06 13:28:05.822283] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
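Both bdevperf runs use the same launch pattern: the generated JSON travels over a process-substitution file descriptor (/dev/fd/63 in the first run, /dev/fd/62 here), so no config file is written to disk. The first run also passes -r /var/tmp/bdevperf.sock so the harness can poll I/O statistics; waitforio (host_management.sh@45-64) gates on those statistics before revoking access, which is where read_io_count=707 came from above. A paraphrased sketch of that first-run sequence:

build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &       # 64 outstanding 64 KiB verify I/Os for 10 s
perfpid=$!
for ((i = 10; i > 0; i--)); do             # waitforio, paraphrased
    ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    [[ $ops -ge 100 ]] && break            # the run above saw 707 read ops at this point
    sleep 0.25                             # assumption: retry interval not shown in the trace
done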
00:31:24.272 [2024-11-06 13:28:05.822357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947502 ] 00:31:24.272 [2024-11-06 13:28:05.914514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.272 [2024-11-06 13:28:05.952764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.272 Running I/O for 1 seconds... 00:31:25.656 1949.00 IOPS, 121.81 MiB/s 00:31:25.656 Latency(us) 00:31:25.656 [2024-11-06T12:28:07.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.656 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:25.656 Verification LBA range: start 0x0 length 0x400 00:31:25.656 Nvme0n1 : 1.06 1905.15 119.07 0.00 0.00 31637.33 4041.39 46749.01 00:31:25.656 [2024-11-06T12:28:07.558Z] =================================================================================================================== 00:31:25.656 [2024-11-06T12:28:07.558Z] Total : 1905.15 119.07 0.00 0.00 31637.33 4041.39 46749.01 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:25.656 rmmod nvme_tcp 00:31:25.656 rmmod nvme_fabrics 00:31:25.656 rmmod nvme_keyring 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1946784 ']' 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1946784 00:31:25.656 13:28:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1946784 ']' 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1946784 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1946784 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1946784' 00:31:25.656 killing process with pid 1946784 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1946784 00:31:25.656 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1946784 00:31:25.916 [2024-11-06 13:28:07.579591] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.916 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.917 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.917 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.917 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.917 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.917 13:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.830 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:27.830 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:27.830 00:31:27.830 real 0m14.838s 00:31:27.830 user 
0m19.494s 00:31:27.830 sys 0m7.536s 00:31:27.830 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:27.830 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:27.830 ************************************ 00:31:27.830 END TEST nvmf_host_management 00:31:27.830 ************************************ 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:28.091 ************************************ 00:31:28.091 START TEST nvmf_lvol 00:31:28.091 ************************************ 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:28.091 * Looking for test storage... 00:31:28.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
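Before the END marker above, nvmftestfini unwinds everything starttarget set up: unload the kernel initiator modules, kill the target by pid, strip only the SPDK-tagged iptables rules, and dissolve the namespace topology. Condensed from the trace; the final step is an assumption, since _remove_spdk_ns runs with xtrace suppressed:

modprobe -v -r nvme-tcp              # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop nvmf_tgt (pid 1946784 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's ACCEPT rules
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk      # assumption: what _remove_spdk_ns does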
00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:28.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.091 --rc genhtml_branch_coverage=1 00:31:28.091 --rc genhtml_function_coverage=1 00:31:28.091 --rc genhtml_legend=1 00:31:28.091 --rc geninfo_all_blocks=1 00:31:28.091 --rc geninfo_unexecuted_blocks=1 00:31:28.091 00:31:28.091 ' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:28.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.091 --rc genhtml_branch_coverage=1 00:31:28.091 --rc genhtml_function_coverage=1 00:31:28.091 --rc genhtml_legend=1 00:31:28.091 --rc geninfo_all_blocks=1 00:31:28.091 --rc geninfo_unexecuted_blocks=1 00:31:28.091 00:31:28.091 ' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:28.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.091 --rc genhtml_branch_coverage=1 00:31:28.091 --rc genhtml_function_coverage=1 00:31:28.091 --rc genhtml_legend=1 00:31:28.091 --rc geninfo_all_blocks=1 00:31:28.091 --rc geninfo_unexecuted_blocks=1 00:31:28.091 00:31:28.091 ' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:28.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:28.091 --rc genhtml_branch_coverage=1 00:31:28.091 --rc genhtml_function_coverage=1 
00:31:28.091 --rc genhtml_legend=1 00:31:28.091 --rc geninfo_all_blocks=1 00:31:28.091 --rc geninfo_unexecuted_blocks=1 00:31:28.091 00:31:28.091 ' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.091 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.352 13:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:28.352 13:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:28.352 13:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.572 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.573 13:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:36.573 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:36.573 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:36.573 Found net devices under 0000:31:00.0: cvl_0_0 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:36.573 Found net devices under 0000:31:00.1: cvl_0_1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.573 
13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:31:36.573 00:31:36.573 --- 10.0.0.2 ping statistics --- 00:31:36.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.573 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:31:36.573 00:31:36.573 --- 10.0.0.1 ping statistics --- 00:31:36.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.573 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.573 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1951882 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1951882 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1951882 ']' 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:36.574 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:36.574 [2024-11-06 13:28:17.591647] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
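The nvmf_tcp_init trace above shows the fixture this suite runs on: one physical port (cvl_0_0) is moved into a private namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2; its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1; an iptables rule opens TCP port 4420; and reachability is verified with one ping in each direction. A minimal sketch of that same pattern, using placeholder interface and namespace names rather than the script's own variables:

  # target-side port goes into a private namespace (cf. cvl_0_0_ns_spdk above)
  ip netns add spdk_tgt_ns
  ip link set "$TGT_IF" netns spdk_tgt_ns
  # address both ends of the link
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev "$TGT_IF"
  # bring the links up, including loopback inside the namespace
  ip link set "$INI_IF" up
  ip netns exec spdk_tgt_ns ip link set "$TGT_IF" up
  ip netns exec spdk_tgt_ns ip link set lo up
  # allow NVMe/TCP traffic in to the listener port
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1

This is also why nvmf_tgt above is launched with the "ip netns exec cvl_0_0_ns_spdk" prefix: the target process only sees the namespaced port, while the initiator-side tools run in the root namespace.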
00:31:36.574 [2024-11-06 13:28:17.592823] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:31:36.574 [2024-11-06 13:28:17.592876] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.574 [2024-11-06 13:28:17.693065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:36.574 [2024-11-06 13:28:17.744917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.574 [2024-11-06 13:28:17.744966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.574 [2024-11-06 13:28:17.744975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.574 [2024-11-06 13:28:17.744982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.574 [2024-11-06 13:28:17.744988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.574 [2024-11-06 13:28:17.746828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.574 [2024-11-06 13:28:17.747019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.574 [2024-11-06 13:28:17.747019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.574 [2024-11-06 13:28:17.823968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.574 [2024-11-06 13:28:17.825010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:36.574 [2024-11-06 13:28:17.825498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.574 [2024-11-06 13:28:17.825623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
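Before the RPC traffic below scrolls past, here is the shape of what it does: two 64 MiB malloc bdevs are striped into a raid0, an lvstore is built on the raid, a lvol of size 20 (per the script's LVOL_BDEV_INIT_SIZE=20) is carved out of it, and that lvol is exposed as a namespace of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. A condensed sketch of the same RPC sequence, with the rpc.py path and all parameters taken from the trace; the shell variables capturing the returned names/UUIDs are illustrative, not the script's own:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, -o/-u flags as in the trace
  m0=$($rpc bdev_malloc_create 64 512)                     # 64 MiB, 512 B blocks -> "Malloc0"
  m1=$($rpc bdev_malloc_create 64 512)                     # -> "Malloc1"
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"   # stripe both malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # prints the lvol bdev name
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The later steps in the trace reuse the same pattern under live I/O from spdk_nvme_perf: bdev_lvol_snapshot on the lvol, bdev_lvol_resize to grow it to 30, bdev_lvol_clone of the snapshot, and bdev_lvol_inflate to detach the clone from its parent.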
00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.574 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:36.835 [2024-11-06 13:28:18.623955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.835 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.095 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:37.095 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:37.355 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:37.355 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:37.616 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:37.616 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8d225e36-85ca-4806-961f-0a548d596eb2 00:31:37.616 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d225e36-85ca-4806-961f-0a548d596eb2 lvol 20 00:31:37.878 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a2e0c38c-f3f6-4e1e-973f-091dcd1dfbd6 00:31:37.878 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:38.139 13:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2e0c38c-f3f6-4e1e-973f-091dcd1dfbd6 00:31:38.139 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.399 [2024-11-06 13:28:20.163917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:38.399 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:38.660 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1952574 00:31:38.660 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:38.660 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:39.603 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a2e0c38c-f3f6-4e1e-973f-091dcd1dfbd6 MY_SNAPSHOT 00:31:39.864 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f28c7dbb-4f54-4db0-9e2a-63729d5a79d4 00:31:39.864 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a2e0c38c-f3f6-4e1e-973f-091dcd1dfbd6 30 00:31:40.125 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f28c7dbb-4f54-4db0-9e2a-63729d5a79d4 MY_CLONE 00:31:40.385 13:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0bc615fe-e68e-4414-a6e9-6dce07a7028e 00:31:40.385 13:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0bc615fe-e68e-4414-a6e9-6dce07a7028e 00:31:40.645 13:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1952574 00:31:48.784 Initializing NVMe Controllers 00:31:48.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:48.784 Controller IO queue size 128, less than required. 00:31:48.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:48.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:48.784 Initialization complete. Launching workers. 
00:31:48.784 ======================================================== 00:31:48.784 Latency(us) 00:31:48.784 Device Information : IOPS MiB/s Average min max 00:31:48.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15437.10 60.30 8294.01 1921.47 62381.78 00:31:48.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15572.70 60.83 8219.44 1812.10 85182.17 00:31:48.784 ======================================================== 00:31:48.784 Total : 31009.80 121.13 8256.56 1812.10 85182.17 00:31:48.784 00:31:48.784 13:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.044 13:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2e0c38c-f3f6-4e1e-973f-091dcd1dfbd6 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d225e36-85ca-4806-961f-0a548d596eb2 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.305 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.305 rmmod nvme_tcp 00:31:49.566 rmmod nvme_fabrics 00:31:49.566 rmmod nvme_keyring 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1951882 ']' 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1951882 ']' 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1951882' 00:31:49.566 killing process with pid 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1951882 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.566 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.827 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.741 00:31:51.741 real 0m23.764s 00:31:51.741 user 0m55.488s 00:31:51.741 sys 0m10.824s 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:51.741 ************************************ 00:31:51.741 END TEST nvmf_lvol 00:31:51.741 ************************************ 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:51.741 ************************************ 00:31:51.741 START TEST nvmf_lvs_grow 00:31:51.741 
************************************ 00:31:51.741 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:52.003 * Looking for test storage... 00:31:52.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.003 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.004 --rc genhtml_branch_coverage=1 00:31:52.004 --rc genhtml_function_coverage=1 00:31:52.004 --rc genhtml_legend=1 00:31:52.004 --rc geninfo_all_blocks=1 00:31:52.004 --rc geninfo_unexecuted_blocks=1 00:31:52.004 00:31:52.004 ' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.004 --rc genhtml_branch_coverage=1 00:31:52.004 --rc genhtml_function_coverage=1 00:31:52.004 --rc genhtml_legend=1 00:31:52.004 --rc geninfo_all_blocks=1 00:31:52.004 --rc geninfo_unexecuted_blocks=1 00:31:52.004 00:31:52.004 ' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.004 --rc genhtml_branch_coverage=1 00:31:52.004 --rc genhtml_function_coverage=1 00:31:52.004 --rc genhtml_legend=1 00:31:52.004 --rc geninfo_all_blocks=1 00:31:52.004 --rc geninfo_unexecuted_blocks=1 00:31:52.004 00:31:52.004 ' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:52.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.004 --rc genhtml_branch_coverage=1 00:31:52.004 --rc genhtml_function_coverage=1 00:31:52.004 --rc genhtml_legend=1 00:31:52.004 --rc geninfo_all_blocks=1 00:31:52.004 --rc geninfo_unexecuted_blocks=1 00:31:52.004 00:31:52.004 ' 00:31:52.004 13:28:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.004 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.157 13:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
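The array plumbing above is how nvmf/common.sh buckets NICs for NET_TYPE=phy: each supported vendor:device pair is looked up in pci_bus_cache and appended to the e810, x722, or mlx array, and because this rig is configured for e810 the e810 list becomes pci_devs. A minimal standalone sketch of the same classification, reading sysfs directly instead of the script's pci_bus_cache (variable names kept from the trace; the sysfs walk is an assumption, not the script's own lookup):

    intel=0x8086
    e810=() x722=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        case "$device" in
            0x1592|0x159b) e810+=("${pci##*/}") ;;  # E810 device IDs matched in this run
            0x37d2)        x722+=("${pci##*/}") ;;  # X722
        esac
    done
    pci_devs=("${e810[@]}")  # NET_TYPE=phy with e810 NICs, as selected above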
00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:00.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:00.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:00.157 Found net devices under 0000:31:00.0: cvl_0_0 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:00.157 Found net devices under 0000:31:00.1: cvl_0_1 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.157 13:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.157 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.157 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.157 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.157 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:32:00.158 00:32:00.158 --- 10.0.0.2 ping statistics --- 00:32:00.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.158 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:32:00.158 00:32:00.158 --- 10.0.0.1 ping statistics --- 00:32:00.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.158 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1958701 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1958701 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1958701 ']' 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:00.158 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.158 [2024-11-06 13:28:41.341987] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
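Both pings succeeding is the exit criterion for nvmf_tcp_init. Stripped of the xtrace noise, the whole init is a short ip(8)/iptables sequence that moves one NIC port into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) genuinely talk over the wire; the essential commands, collected from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator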
00:32:00.158 [2024-11-06 13:28:41.343155] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:00.158 [2024-11-06 13:28:41.343206] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.158 [2024-11-06 13:28:41.443839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.158 [2024-11-06 13:28:41.489570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.158 [2024-11-06 13:28:41.489614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.158 [2024-11-06 13:28:41.489622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.158 [2024-11-06 13:28:41.489629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.158 [2024-11-06 13:28:41.489635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.158 [2024-11-06 13:28:41.490328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.158 [2024-11-06 13:28:41.554219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.158 [2024-11-06 13:28:41.554490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.418 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.679 [2024-11-06 13:28:42.439221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:00.679 ************************************ 00:32:00.679 START TEST lvs_grow_clean 00:32:00.679 ************************************ 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:00.679 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:00.680 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:00.680 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.680 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:00.680 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.941 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:00.941 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:01.203 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:01.203 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:01.203 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:01.203 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:01.203 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:01.203 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 lvol 150 00:32:01.464 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=26db3104-a555-4ad8-bc5e-079d395e21a5 00:32:01.464 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:01.464 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:01.726 [2024-11-06 13:28:43.442904] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:01.726 [2024-11-06 13:28:43.443073] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:01.726 true 00:32:01.727 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:01.727 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:01.988 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:01.988 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:01.988 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 26db3104-a555-4ad8-bc5e-079d395e21a5 00:32:02.249 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.511 [2024-11-06 13:28:44.199569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1959324 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1959324 /var/tmp/bdevperf.sock 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1959324 ']' 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:02.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:02.511 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.772 [2024-11-06 13:28:44.420343] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:02.772 [2024-11-06 13:28:44.420407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959324 ] 00:32:02.772 [2024-11-06 13:28:44.514285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.772 [2024-11-06 13:28:44.566919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.716 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:03.716 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:03.716 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:03.978 Nvme0n1 00:32:03.978 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:03.978 [ 00:32:03.978 { 00:32:03.978 "name": "Nvme0n1", 00:32:03.978 "aliases": [ 00:32:03.979 "26db3104-a555-4ad8-bc5e-079d395e21a5" 00:32:03.979 ], 00:32:03.979 "product_name": "NVMe disk", 00:32:03.979 "block_size": 4096, 00:32:03.979 "num_blocks": 38912, 00:32:03.979 "uuid": "26db3104-a555-4ad8-bc5e-079d395e21a5", 00:32:03.979 "numa_id": 0, 00:32:03.979 "assigned_rate_limits": { 00:32:03.979 "rw_ios_per_sec": 0, 00:32:03.979 "rw_mbytes_per_sec": 0, 00:32:03.979 "r_mbytes_per_sec": 0, 00:32:03.979 "w_mbytes_per_sec": 0 00:32:03.979 }, 00:32:03.979 "claimed": false, 00:32:03.979 "zoned": false, 00:32:03.979 "supported_io_types": { 00:32:03.979 "read": true, 00:32:03.979 "write": true, 00:32:03.979 "unmap": true, 00:32:03.979 "flush": true, 00:32:03.979 "reset": true, 00:32:03.979 "nvme_admin": true, 00:32:03.979 "nvme_io": true, 00:32:03.979 "nvme_io_md": false, 00:32:03.979 "write_zeroes": true, 00:32:03.979 "zcopy": false, 00:32:03.979 "get_zone_info": false, 00:32:03.979 "zone_management": false, 00:32:03.979 "zone_append": false, 00:32:03.979 "compare": true, 00:32:03.979 "compare_and_write": true, 00:32:03.979 "abort": true, 00:32:03.979 "seek_hole": false, 00:32:03.979 "seek_data": false, 00:32:03.979 "copy": true, 
00:32:03.979 "nvme_iov_md": false 00:32:03.979 }, 00:32:03.979 "memory_domains": [ 00:32:03.979 { 00:32:03.979 "dma_device_id": "system", 00:32:03.979 "dma_device_type": 1 00:32:03.979 } 00:32:03.979 ], 00:32:03.979 "driver_specific": { 00:32:03.979 "nvme": [ 00:32:03.979 { 00:32:03.979 "trid": { 00:32:03.979 "trtype": "TCP", 00:32:03.979 "adrfam": "IPv4", 00:32:03.979 "traddr": "10.0.0.2", 00:32:03.979 "trsvcid": "4420", 00:32:03.979 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:03.979 }, 00:32:03.979 "ctrlr_data": { 00:32:03.979 "cntlid": 1, 00:32:03.979 "vendor_id": "0x8086", 00:32:03.979 "model_number": "SPDK bdev Controller", 00:32:03.979 "serial_number": "SPDK0", 00:32:03.979 "firmware_revision": "25.01", 00:32:03.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.979 "oacs": { 00:32:03.979 "security": 0, 00:32:03.979 "format": 0, 00:32:03.979 "firmware": 0, 00:32:03.979 "ns_manage": 0 00:32:03.979 }, 00:32:03.979 "multi_ctrlr": true, 00:32:03.979 "ana_reporting": false 00:32:03.979 }, 00:32:03.979 "vs": { 00:32:03.979 "nvme_version": "1.3" 00:32:03.979 }, 00:32:03.979 "ns_data": { 00:32:03.979 "id": 1, 00:32:03.979 "can_share": true 00:32:03.979 } 00:32:03.979 } 00:32:03.979 ], 00:32:03.979 "mp_policy": "active_passive" 00:32:03.979 } 00:32:03.979 } 00:32:03.979 ] 00:32:03.979 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1959662 00:32:03.979 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:03.979 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.240 Running I/O for 10 seconds... 
00:32:05.184 Latency(us) 00:32:05.184 [2024-11-06T12:28:47.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.184 Nvme0n1 : 1.00 16664.00 65.09 0.00 0.00 0.00 0.00 0.00 00:32:05.184 [2024-11-06T12:28:47.086Z] =================================================================================================================== 00:32:05.184 [2024-11-06T12:28:47.086Z] Total : 16664.00 65.09 0.00 0.00 0.00 0.00 0.00 00:32:05.184 00:32:06.129 13:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:06.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.129 Nvme0n1 : 2.00 16968.00 66.28 0.00 0.00 0.00 0.00 0.00 00:32:06.129 [2024-11-06T12:28:48.031Z] =================================================================================================================== 00:32:06.129 [2024-11-06T12:28:48.031Z] Total : 16968.00 66.28 0.00 0.00 0.00 0.00 0.00 00:32:06.129 00:32:06.129 true 00:32:06.390 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:06.390 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:06.390 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:06.390 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:06.390 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1959662 00:32:07.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.333 Nvme0n1 : 3.00 17154.00 67.01 0.00 0.00 0.00 0.00 0.00 00:32:07.333 [2024-11-06T12:28:49.236Z] =================================================================================================================== 00:32:07.334 [2024-11-06T12:28:49.236Z] Total : 17154.00 67.01 0.00 0.00 0.00 0.00 0.00 00:32:07.334 00:32:08.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.277 Nvme0n1 : 4.00 17374.00 67.87 0.00 0.00 0.00 0.00 0.00 00:32:08.277 [2024-11-06T12:28:50.179Z] =================================================================================================================== 00:32:08.277 [2024-11-06T12:28:50.179Z] Total : 17374.00 67.87 0.00 0.00 0.00 0.00 0.00 00:32:08.277 00:32:09.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.220 Nvme0n1 : 5.00 17506.00 68.38 0.00 0.00 0.00 0.00 0.00 00:32:09.220 [2024-11-06T12:28:51.122Z] =================================================================================================================== 00:32:09.220 [2024-11-06T12:28:51.122Z] Total : 17506.00 68.38 0.00 0.00 0.00 0.00 0.00 00:32:09.220 00:32:10.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.164 Nvme0n1 : 6.00 18821.83 73.52 0.00 0.00 0.00 0.00 0.00 00:32:10.164 [2024-11-06T12:28:52.066Z] 
=================================================================================================================== 00:32:10.164 [2024-11-06T12:28:52.066Z] Total : 18821.83 73.52 0.00 0.00 0.00 0.00 0.00 00:32:10.164 00:32:11.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.132 Nvme0n1 : 7.00 19764.00 77.20 0.00 0.00 0.00 0.00 0.00 00:32:11.132 [2024-11-06T12:28:53.034Z] =================================================================================================================== 00:32:11.132 [2024-11-06T12:28:53.034Z] Total : 19764.00 77.20 0.00 0.00 0.00 0.00 0.00 00:32:11.132 00:32:12.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.076 Nvme0n1 : 8.00 20470.62 79.96 0.00 0.00 0.00 0.00 0.00 00:32:12.076 [2024-11-06T12:28:53.978Z] =================================================================================================================== 00:32:12.076 [2024-11-06T12:28:53.978Z] Total : 20470.62 79.96 0.00 0.00 0.00 0.00 0.00 00:32:12.076 00:32:13.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.462 Nvme0n1 : 9.00 21032.44 82.16 0.00 0.00 0.00 0.00 0.00 00:32:13.462 [2024-11-06T12:28:55.364Z] =================================================================================================================== 00:32:13.462 [2024-11-06T12:28:55.364Z] Total : 21032.44 82.16 0.00 0.00 0.00 0.00 0.00 00:32:13.462 00:32:14.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.404 Nvme0n1 : 10.00 21469.20 83.86 0.00 0.00 0.00 0.00 0.00 00:32:14.404 [2024-11-06T12:28:56.306Z] =================================================================================================================== 00:32:14.404 [2024-11-06T12:28:56.306Z] Total : 21469.20 83.86 0.00 0.00 0.00 0.00 0.00 00:32:14.404 00:32:14.404 00:32:14.404 Latency(us) 00:32:14.404 [2024-11-06T12:28:56.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.404 Nvme0n1 : 10.00 21474.99 83.89 0.00 0.00 5957.60 2990.08 29491.20 00:32:14.404 [2024-11-06T12:28:56.306Z] =================================================================================================================== 00:32:14.404 [2024-11-06T12:28:56.306Z] Total : 21474.99 83.89 0.00 0.00 5957.60 2990.08 29491.20 00:32:14.404 { 00:32:14.404 "results": [ 00:32:14.404 { 00:32:14.404 "job": "Nvme0n1", 00:32:14.404 "core_mask": "0x2", 00:32:14.404 "workload": "randwrite", 00:32:14.404 "status": "finished", 00:32:14.404 "queue_depth": 128, 00:32:14.404 "io_size": 4096, 00:32:14.404 "runtime": 10.003262, 00:32:14.404 "iops": 21474.994856677753, 00:32:14.404 "mibps": 83.88669865889747, 00:32:14.405 "io_failed": 0, 00:32:14.405 "io_timeout": 0, 00:32:14.405 "avg_latency_us": 5957.604988238215, 00:32:14.405 "min_latency_us": 2990.08, 00:32:14.405 "max_latency_us": 29491.2 00:32:14.405 } 00:32:14.405 ], 00:32:14.405 "core_count": 1 00:32:14.405 } 00:32:14.405 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1959324 00:32:14.405 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1959324 ']' 00:32:14.405 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1959324 00:32:14.405 13:28:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:14.405 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:14.405 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1959324 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1959324' 00:32:14.405 killing process with pid 1959324 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1959324 00:32:14.405 Received shutdown signal, test time was about 10.000000 seconds 00:32:14.405 00:32:14.405 Latency(us) 00:32:14.405 [2024-11-06T12:28:56.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.405 [2024-11-06T12:28:56.307Z] =================================================================================================================== 00:32:14.405 [2024-11-06T12:28:56.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1959324 00:32:14.405 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:14.666 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.666 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:14.666 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:14.927 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:14.927 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:14.927 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:14.927 [2024-11-06 13:28:56.810975] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:15.188 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:15.188 13:28:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:15.188 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:15.188 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:15.189 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:15.189 request: 00:32:15.189 { 00:32:15.189 "uuid": "1ccb8cf5-59e1-4764-9f23-eac4ef29f920", 00:32:15.189 "method": "bdev_lvol_get_lvstores", 00:32:15.189 "req_id": 1 00:32:15.189 } 00:32:15.189 Got JSON-RPC error response 00:32:15.189 response: 00:32:15.189 { 00:32:15.189 "code": -19, 00:32:15.189 "message": "No such device" 00:32:15.189 } 00:32:15.189 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:15.189 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.189 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.189 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.189 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:15.454 aio_bdev 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 26db3104-a555-4ad8-bc5e-079d395e21a5 
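That closes out lvs_grow_clean: the grow was verified (total_data_clusters 49 -> 99, free_clusters 61 after the 10 s run), the listener and subsystem were torn down, and the NOT wrapper above asserts that bdev_lvol_get_lvstores fails while aio_bdev is deleted out from under the live lvstore, as the error response just below confirms. Stripped of the wrappers, the core of the test is this RPC sequence (aio_file and $lvs are placeholders for the full path and lvstore UUID in the trace):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs        # 49 data clusters
    rpc.py bdev_lvol_create -u $lvs lvol 150                 # 150 MiB lvol
    truncate -s 400M aio_file
    rpc.py bdev_aio_rescan aio_bdev                          # bdev grows 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u $lvs
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # expect 99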
00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=26db3104-a555-4ad8-bc5e-079d395e21a5 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:15.454 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:15.718 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 26db3104-a555-4ad8-bc5e-079d395e21a5 -t 2000 00:32:15.718 [ 00:32:15.718 { 00:32:15.718 "name": "26db3104-a555-4ad8-bc5e-079d395e21a5", 00:32:15.718 "aliases": [ 00:32:15.718 "lvs/lvol" 00:32:15.718 ], 00:32:15.718 "product_name": "Logical Volume", 00:32:15.718 "block_size": 4096, 00:32:15.718 "num_blocks": 38912, 00:32:15.718 "uuid": "26db3104-a555-4ad8-bc5e-079d395e21a5", 00:32:15.719 "assigned_rate_limits": { 00:32:15.719 "rw_ios_per_sec": 0, 00:32:15.719 "rw_mbytes_per_sec": 0, 00:32:15.719 "r_mbytes_per_sec": 0, 00:32:15.719 "w_mbytes_per_sec": 0 00:32:15.719 }, 00:32:15.719 "claimed": false, 00:32:15.719 "zoned": false, 00:32:15.719 "supported_io_types": { 00:32:15.719 "read": true, 00:32:15.719 "write": true, 00:32:15.719 "unmap": true, 00:32:15.719 "flush": false, 00:32:15.719 "reset": true, 00:32:15.719 "nvme_admin": false, 00:32:15.719 "nvme_io": false, 00:32:15.719 "nvme_io_md": false, 00:32:15.719 "write_zeroes": true, 00:32:15.719 "zcopy": false, 00:32:15.719 "get_zone_info": false, 00:32:15.719 "zone_management": false, 00:32:15.719 "zone_append": false, 00:32:15.719 "compare": false, 00:32:15.719 "compare_and_write": false, 00:32:15.719 "abort": false, 00:32:15.719 "seek_hole": true, 00:32:15.719 "seek_data": true, 00:32:15.719 "copy": false, 00:32:15.719 "nvme_iov_md": false 00:32:15.719 }, 00:32:15.719 "driver_specific": { 00:32:15.719 "lvol": { 00:32:15.719 "lvol_store_uuid": "1ccb8cf5-59e1-4764-9f23-eac4ef29f920", 00:32:15.719 "base_bdev": "aio_bdev", 00:32:15.719 "thin_provision": false, 00:32:15.719 "num_allocated_clusters": 38, 00:32:15.719 "snapshot": false, 00:32:15.719 "clone": false, 00:32:15.719 "esnap_clone": false 00:32:15.719 } 00:32:15.719 } 00:32:15.719 } 00:32:15.719 ] 00:32:15.719 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:15.719 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:15.719 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:15.980 13:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:15.980 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:15.980 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:16.241 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:16.241 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 26db3104-a555-4ad8-bc5e-079d395e21a5 00:32:16.241 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ccb8cf5-59e1-4764-9f23-eac4ef29f920 00:32:16.502 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.764 00:32:16.764 real 0m15.986s 00:32:16.764 user 0m15.667s 00:32:16.764 sys 0m1.424s 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:16.764 ************************************ 00:32:16.764 END TEST lvs_grow_clean 00:32:16.764 ************************************ 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:16.764 ************************************ 00:32:16.764 START TEST lvs_grow_dirty 00:32:16.764 ************************************ 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:16.764 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:17.043 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:17.043 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:17.323 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:17.323 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:17.323 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:17.323 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:17.323 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:17.323 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 lvol 150 00:32:17.604 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:17.604 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:17.604 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:17.604 [2024-11-06 13:28:59.454906] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:17.604 [2024-11-06 13:28:59.455074] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:17.604 true 00:32:17.604 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:17.604 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:17.867 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:17.867 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:18.128 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:18.128 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:18.390 [2024-11-06 13:29:00.187518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.390 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:18.650 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1962402 00:32:18.650 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:18.650 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:18.650 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1962402 /var/tmp/bdevperf.sock 00:32:18.650 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1962402 ']' 00:32:18.651 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:18.651 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:18.651 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:18.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
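For orientation, the trace above has just finished the dirty-grow setup, and the cluster arithmetic asserted later follows directly from the sizes used: the 200 MiB backing file split into 4 MiB clusters gives 50 clusters, and the lvstore reports 49 data clusters, consistent with one cluster apparently held back for metadata at this --md-pages-per-cluster-ratio; the 150 MiB lvol rounds up to 38 clusters (38 x 1024 = 38912 4 KiB blocks, matching "num_blocks": 38912 in the JSON above); and after the file is truncated to 400 MiB, bdev_aio_rescan raises the aio bdev from 51200 to 102400 blocks while total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs. A condensed sketch of the same sequence, reconstructed from the trace (rpc.py abbreviates the full scripts/rpc.py path; $lvs and $lvol are the UUIDs each command prints):

  truncate -s 200M aio_bdev_file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB rounds up to 38 x 4 MiB clusters
  truncate -s 400M aio_bdev_file                       # grow the backing file on disk...
  rpc.py bdev_aio_rescan aio_bdev                      # ...and re-read its size (51200 -> 102400 blocks)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420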
00:32:18.651 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:18.651 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:18.651 [2024-11-06 13:29:00.421285] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:18.651 [2024-11-06 13:29:00.421343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962402 ] 00:32:18.651 [2024-11-06 13:29:00.504782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.651 [2024-11-06 13:29:00.536112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.592 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:19.592 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:19.592 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:19.592 Nvme0n1 00:32:19.592 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:19.853 [ 00:32:19.853 { 00:32:19.853 "name": "Nvme0n1", 00:32:19.853 "aliases": [ 00:32:19.853 "ac1a96a7-5975-4bd9-8bed-e2395655a4e5" 00:32:19.853 ], 00:32:19.853 "product_name": "NVMe disk", 00:32:19.853 "block_size": 4096, 00:32:19.853 "num_blocks": 38912, 00:32:19.853 "uuid": "ac1a96a7-5975-4bd9-8bed-e2395655a4e5", 00:32:19.853 "numa_id": 0, 00:32:19.853 "assigned_rate_limits": { 00:32:19.853 "rw_ios_per_sec": 0, 00:32:19.853 "rw_mbytes_per_sec": 0, 00:32:19.853 "r_mbytes_per_sec": 0, 00:32:19.853 "w_mbytes_per_sec": 0 00:32:19.853 }, 00:32:19.853 "claimed": false, 00:32:19.853 "zoned": false, 00:32:19.853 "supported_io_types": { 00:32:19.853 "read": true, 00:32:19.853 "write": true, 00:32:19.853 "unmap": true, 00:32:19.853 "flush": true, 00:32:19.853 "reset": true, 00:32:19.853 "nvme_admin": true, 00:32:19.853 "nvme_io": true, 00:32:19.853 "nvme_io_md": false, 00:32:19.853 "write_zeroes": true, 00:32:19.853 "zcopy": false, 00:32:19.853 "get_zone_info": false, 00:32:19.853 "zone_management": false, 00:32:19.853 "zone_append": false, 00:32:19.853 "compare": true, 00:32:19.853 "compare_and_write": true, 00:32:19.854 "abort": true, 00:32:19.854 "seek_hole": false, 00:32:19.854 "seek_data": false, 00:32:19.854 "copy": true, 00:32:19.854 "nvme_iov_md": false 00:32:19.854 }, 00:32:19.854 "memory_domains": [ 00:32:19.854 { 00:32:19.854 "dma_device_id": "system", 00:32:19.854 "dma_device_type": 1 00:32:19.854 } 00:32:19.854 ], 00:32:19.854 "driver_specific": { 00:32:19.854 "nvme": [ 00:32:19.854 { 00:32:19.854 "trid": { 00:32:19.854 "trtype": "TCP", 00:32:19.854 "adrfam": "IPv4", 00:32:19.854 "traddr": "10.0.0.2", 00:32:19.854 "trsvcid": "4420", 00:32:19.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:19.854 }, 00:32:19.854 "ctrlr_data": 
{ 00:32:19.854 "cntlid": 1, 00:32:19.854 "vendor_id": "0x8086", 00:32:19.854 "model_number": "SPDK bdev Controller", 00:32:19.854 "serial_number": "SPDK0", 00:32:19.854 "firmware_revision": "25.01", 00:32:19.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.854 "oacs": { 00:32:19.854 "security": 0, 00:32:19.854 "format": 0, 00:32:19.854 "firmware": 0, 00:32:19.854 "ns_manage": 0 00:32:19.854 }, 00:32:19.854 "multi_ctrlr": true, 00:32:19.854 "ana_reporting": false 00:32:19.854 }, 00:32:19.854 "vs": { 00:32:19.854 "nvme_version": "1.3" 00:32:19.854 }, 00:32:19.854 "ns_data": { 00:32:19.854 "id": 1, 00:32:19.854 "can_share": true 00:32:19.854 } 00:32:19.854 } 00:32:19.854 ], 00:32:19.854 "mp_policy": "active_passive" 00:32:19.854 } 00:32:19.854 } 00:32:19.854 ] 00:32:19.854 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1962738 00:32:19.854 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:19.854 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:19.854 Running I/O for 10 seconds... 00:32:21.238 Latency(us) 00:32:21.238 [2024-11-06T12:29:03.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.238 Nvme0n1 : 1.00 17408.00 68.00 0.00 0.00 0.00 0.00 0.00 00:32:21.238 [2024-11-06T12:29:03.140Z] =================================================================================================================== 00:32:21.238 [2024-11-06T12:29:03.140Z] Total : 17408.00 68.00 0.00 0.00 0.00 0.00 0.00 00:32:21.238 00:32:21.810 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:22.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.071 Nvme0n1 : 2.00 17657.50 68.97 0.00 0.00 0.00 0.00 0.00 00:32:22.071 [2024-11-06T12:29:03.973Z] =================================================================================================================== 00:32:22.071 [2024-11-06T12:29:03.973Z] Total : 17657.50 68.97 0.00 0.00 0.00 0.00 0.00 00:32:22.071 00:32:22.071 true 00:32:22.071 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:22.071 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:22.331 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:22.331 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:22.331 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1962738 00:32:22.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.995 Nvme0n1 : 
3.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:32:22.995 [2024-11-06T12:29:04.897Z] =================================================================================================================== 00:32:22.995 [2024-11-06T12:29:04.897Z] Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:32:22.995 00:32:23.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.938 Nvme0n1 : 4.00 17845.75 69.71 0.00 0.00 0.00 0.00 0.00 00:32:23.938 [2024-11-06T12:29:05.840Z] =================================================================================================================== 00:32:23.938 [2024-11-06T12:29:05.840Z] Total : 17845.75 69.71 0.00 0.00 0.00 0.00 0.00 00:32:23.938 00:32:24.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.880 Nvme0n1 : 5.00 18150.20 70.90 0.00 0.00 0.00 0.00 0.00 00:32:24.880 [2024-11-06T12:29:06.782Z] =================================================================================================================== 00:32:24.880 [2024-11-06T12:29:06.782Z] Total : 18150.20 70.90 0.00 0.00 0.00 0.00 0.00 00:32:24.880 00:32:26.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.265 Nvme0n1 : 6.00 19361.67 75.63 0.00 0.00 0.00 0.00 0.00 00:32:26.265 [2024-11-06T12:29:08.167Z] =================================================================================================================== 00:32:26.265 [2024-11-06T12:29:08.167Z] Total : 19361.67 75.63 0.00 0.00 0.00 0.00 0.00 00:32:26.265 00:32:27.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.208 Nvme0n1 : 7.00 20224.29 79.00 0.00 0.00 0.00 0.00 0.00 00:32:27.208 [2024-11-06T12:29:09.110Z] =================================================================================================================== 00:32:27.208 [2024-11-06T12:29:09.110Z] Total : 20224.29 79.00 0.00 0.00 0.00 0.00 0.00 00:32:27.208 00:32:28.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.150 Nvme0n1 : 8.00 20885.62 81.58 0.00 0.00 0.00 0.00 0.00 00:32:28.150 [2024-11-06T12:29:10.052Z] =================================================================================================================== 00:32:28.150 [2024-11-06T12:29:10.052Z] Total : 20885.62 81.58 0.00 0.00 0.00 0.00 0.00 00:32:28.150 00:32:29.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:29.093 Nvme0n1 : 9.00 21388.56 83.55 0.00 0.00 0.00 0.00 0.00 00:32:29.093 [2024-11-06T12:29:10.995Z] =================================================================================================================== 00:32:29.093 [2024-11-06T12:29:10.995Z] Total : 21388.56 83.55 0.00 0.00 0.00 0.00 0.00 00:32:29.093 00:32:30.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.034 Nvme0n1 : 10.00 21796.20 85.14 0.00 0.00 0.00 0.00 0.00 00:32:30.034 [2024-11-06T12:29:11.936Z] =================================================================================================================== 00:32:30.034 [2024-11-06T12:29:11.936Z] Total : 21796.20 85.14 0.00 0.00 0.00 0.00 0.00 00:32:30.034 00:32:30.034 00:32:30.034 Latency(us) 00:32:30.034 [2024-11-06T12:29:11.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:30.034 Nvme0n1 : 10.00 21797.18 85.15 0.00 0.00 5869.68 2880.85 28398.93 00:32:30.034 
[2024-11-06T12:29:11.936Z] =================================================================================================================== 00:32:30.034 [2024-11-06T12:29:11.937Z] Total : 21797.18 85.15 0.00 0.00 5869.68 2880.85 28398.93 00:32:30.035 { 00:32:30.035 "results": [ 00:32:30.035 { 00:32:30.035 "job": "Nvme0n1", 00:32:30.035 "core_mask": "0x2", 00:32:30.035 "workload": "randwrite", 00:32:30.035 "status": "finished", 00:32:30.035 "queue_depth": 128, 00:32:30.035 "io_size": 4096, 00:32:30.035 "runtime": 10.004871, 00:32:30.035 "iops": 21797.182592359262, 00:32:30.035 "mibps": 85.14524450140337, 00:32:30.035 "io_failed": 0, 00:32:30.035 "io_timeout": 0, 00:32:30.035 "avg_latency_us": 5869.676329875854, 00:32:30.035 "min_latency_us": 2880.8533333333335, 00:32:30.035 "max_latency_us": 28398.933333333334 00:32:30.035 } 00:32:30.035 ], 00:32:30.035 "core_count": 1 00:32:30.035 } 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1962402 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1962402 ']' 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1962402 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1962402 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1962402' 00:32:30.035 killing process with pid 1962402 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1962402 00:32:30.035 Received shutdown signal, test time was about 10.000000 seconds 00:32:30.035 00:32:30.035 Latency(us) 00:32:30.035 [2024-11-06T12:29:11.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.035 [2024-11-06T12:29:11.937Z] =================================================================================================================== 00:32:30.035 [2024-11-06T12:29:11.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.035 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1962402 00:32:30.296 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:30.296 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:30.556 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:30.556 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1958701 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1958701 00:32:30.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1958701 Killed "${NVMF_APP[@]}" "$@" 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1964751 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1964751 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1964751 ']' 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
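Two things are worth noting at this point in the trace: the free-cluster assertion and the deliberately unclean shutdown. After bdev_lvol_grow_lvstore ran mid-workload, the pool reports 99 data clusters (400 MiB / 4 MiB = 100, again less one apparently reserved for metadata), and with the lvol still holding 38 of them the expected free count is 99 - 38 = 61, which is exactly what the check below verifies. Immediately afterwards the original nvmf_tgt (pid 1958701) is killed with SIGKILL, so the lvstore never gets a clean shutdown; that is what makes this the "dirty" variant, and it forces the blobstore recovery seen a few entries further down once a fresh target reloads the same backing file. The assertion pattern, condensed (rpc.py again abbreviates the full scripts/rpc.py path):

  free_clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  (( free_clusters == 61 ))   # 99 total data clusters - 38 allocated by the 150 MiB lvol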
00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.817 13:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:30.817 [2024-11-06 13:29:12.647114] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.817 [2024-11-06 13:29:12.648187] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:30.817 [2024-11-06 13:29:12.648230] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.078 [2024-11-06 13:29:12.738760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.078 [2024-11-06 13:29:12.768535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.078 [2024-11-06 13:29:12.768561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.078 [2024-11-06 13:29:12.768568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.078 [2024-11-06 13:29:12.768574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.078 [2024-11-06 13:29:12.768578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.078 [2024-11-06 13:29:12.769042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.078 [2024-11-06 13:29:12.819320] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.078 [2024-11-06 13:29:12.819502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
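The restarted target is now running with --interrupt-mode (note the "Set spdk_thread ... to intr mode" notices above), and its next step re-registers the same backing file. Because the previous instance was SIGKILLed, the blobstore superblock was never marked clean, so the load path performs recovery and replays both blobs, as the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices just below show. The waitforbdev helper traced below then confirms the lvol survived the crash; its core, condensed from the trace, is roughly:

  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096   # reload triggers lvstore examine + recovery
  rpc.py bdev_wait_for_examine                         # block until all examine callbacks complete
  rpc.py bdev_get_bdevs -b ac1a96a7-5975-4bd9-8bed-e2395655a4e5 -t 2000   # wait up to 2s for the lvol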
00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.648 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:31.910 [2024-11-06 13:29:13.627504] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:31.910 [2024-11-06 13:29:13.627743] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:31.910 [2024-11-06 13:29:13.627850] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:31.910 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:32.171 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac1a96a7-5975-4bd9-8bed-e2395655a4e5 -t 2000 00:32:32.171 [ 00:32:32.171 { 00:32:32.171 "name": "ac1a96a7-5975-4bd9-8bed-e2395655a4e5", 00:32:32.171 "aliases": [ 00:32:32.171 "lvs/lvol" 00:32:32.171 ], 00:32:32.171 "product_name": "Logical Volume", 00:32:32.171 "block_size": 4096, 00:32:32.171 "num_blocks": 38912, 00:32:32.171 "uuid": "ac1a96a7-5975-4bd9-8bed-e2395655a4e5", 00:32:32.171 "assigned_rate_limits": { 00:32:32.171 "rw_ios_per_sec": 0, 00:32:32.171 "rw_mbytes_per_sec": 0, 00:32:32.171 
"r_mbytes_per_sec": 0, 00:32:32.171 "w_mbytes_per_sec": 0 00:32:32.171 }, 00:32:32.171 "claimed": false, 00:32:32.171 "zoned": false, 00:32:32.171 "supported_io_types": { 00:32:32.171 "read": true, 00:32:32.171 "write": true, 00:32:32.171 "unmap": true, 00:32:32.171 "flush": false, 00:32:32.171 "reset": true, 00:32:32.171 "nvme_admin": false, 00:32:32.171 "nvme_io": false, 00:32:32.171 "nvme_io_md": false, 00:32:32.171 "write_zeroes": true, 00:32:32.171 "zcopy": false, 00:32:32.171 "get_zone_info": false, 00:32:32.171 "zone_management": false, 00:32:32.171 "zone_append": false, 00:32:32.171 "compare": false, 00:32:32.171 "compare_and_write": false, 00:32:32.171 "abort": false, 00:32:32.171 "seek_hole": true, 00:32:32.171 "seek_data": true, 00:32:32.171 "copy": false, 00:32:32.171 "nvme_iov_md": false 00:32:32.171 }, 00:32:32.171 "driver_specific": { 00:32:32.171 "lvol": { 00:32:32.171 "lvol_store_uuid": "7fe15636-e01c-4bf0-a852-d30fc070eb11", 00:32:32.171 "base_bdev": "aio_bdev", 00:32:32.171 "thin_provision": false, 00:32:32.171 "num_allocated_clusters": 38, 00:32:32.171 "snapshot": false, 00:32:32.171 "clone": false, 00:32:32.171 "esnap_clone": false 00:32:32.171 } 00:32:32.171 } 00:32:32.171 } 00:32:32.171 ] 00:32:32.171 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:32.171 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:32.171 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:32.431 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:32.431 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:32.431 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:32.692 [2024-11-06 13:29:14.521515] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:32.692 13:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:32.692 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:32.952 request: 00:32:32.952 { 00:32:32.952 "uuid": "7fe15636-e01c-4bf0-a852-d30fc070eb11", 00:32:32.952 "method": "bdev_lvol_get_lvstores", 00:32:32.952 "req_id": 1 00:32:32.952 } 00:32:32.952 Got JSON-RPC error response 00:32:32.952 response: 00:32:32.952 { 00:32:32.952 "code": -19, 00:32:32.952 "message": "No such device" 00:32:32.952 } 00:32:32.952 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:32.952 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:32.952 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:32.952 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:32.952 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:33.212 aio_bdev 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:33.212 13:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:33.212 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:33.212 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ac1a96a7-5975-4bd9-8bed-e2395655a4e5 -t 2000 00:32:33.473 [ 00:32:33.473 { 00:32:33.473 "name": "ac1a96a7-5975-4bd9-8bed-e2395655a4e5", 00:32:33.473 "aliases": [ 00:32:33.473 "lvs/lvol" 00:32:33.473 ], 00:32:33.473 "product_name": "Logical Volume", 00:32:33.473 "block_size": 4096, 00:32:33.473 "num_blocks": 38912, 00:32:33.473 "uuid": "ac1a96a7-5975-4bd9-8bed-e2395655a4e5", 00:32:33.473 "assigned_rate_limits": { 00:32:33.473 "rw_ios_per_sec": 0, 00:32:33.473 "rw_mbytes_per_sec": 0, 00:32:33.473 "r_mbytes_per_sec": 0, 00:32:33.473 "w_mbytes_per_sec": 0 00:32:33.473 }, 00:32:33.473 "claimed": false, 00:32:33.473 "zoned": false, 00:32:33.473 "supported_io_types": { 00:32:33.473 "read": true, 00:32:33.473 "write": true, 00:32:33.473 "unmap": true, 00:32:33.473 "flush": false, 00:32:33.473 "reset": true, 00:32:33.473 "nvme_admin": false, 00:32:33.473 "nvme_io": false, 00:32:33.473 "nvme_io_md": false, 00:32:33.473 "write_zeroes": true, 00:32:33.473 "zcopy": false, 00:32:33.473 "get_zone_info": false, 00:32:33.473 "zone_management": false, 00:32:33.473 "zone_append": false, 00:32:33.473 "compare": false, 00:32:33.473 "compare_and_write": false, 00:32:33.473 "abort": false, 00:32:33.473 "seek_hole": true, 00:32:33.473 "seek_data": true, 00:32:33.473 "copy": false, 00:32:33.473 "nvme_iov_md": false 00:32:33.473 }, 00:32:33.473 "driver_specific": { 00:32:33.473 "lvol": { 00:32:33.473 "lvol_store_uuid": "7fe15636-e01c-4bf0-a852-d30fc070eb11", 00:32:33.473 "base_bdev": "aio_bdev", 00:32:33.473 "thin_provision": false, 00:32:33.473 "num_allocated_clusters": 38, 00:32:33.473 "snapshot": false, 00:32:33.473 "clone": false, 00:32:33.473 "esnap_clone": false 00:32:33.473 } 00:32:33.473 } 00:32:33.473 } 00:32:33.473 ] 00:32:33.473 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:33.473 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:33.473 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:33.734 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:33.734 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:33.734 13:29:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:33.734 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:33.734 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac1a96a7-5975-4bd9-8bed-e2395655a4e5 00:32:33.996 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fe15636-e01c-4bf0-a852-d30fc070eb11 00:32:34.256 13:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:34.516 00:32:34.516 real 0m17.626s 00:32:34.516 user 0m35.491s 00:32:34.516 sys 0m3.120s 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:34.516 ************************************ 00:32:34.516 END TEST lvs_grow_dirty 00:32:34.516 ************************************ 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:34.516 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:34.517 nvmf_trace.0 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
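With both sub-tests passed and nvmf_trace.0 archived, the trap handler runs nvmftestfini and the remaining lines are teardown. Condensed, the sequence traced around this point is roughly as follows (killprocess and the iptables/namespace steps are autotest helpers, simplified here to their visible effect):

  sync
  modprobe -v -r nvme-tcp       # unloading also drops nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"               # stop the nvmf_tgt started for this test (pid 1964751 here)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the test's firewall rules
  ip -4 addr flush cvl_0_1      # clear addresses from the secondary test interface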
00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:34.517 rmmod nvme_tcp 00:32:34.517 rmmod nvme_fabrics 00:32:34.517 rmmod nvme_keyring 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1964751 ']' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1964751 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1964751 ']' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1964751 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.517 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1964751 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1964751' 00:32:34.777 killing process with pid 1964751 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1964751 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1964751 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.777 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:37.323 00:32:37.323 real 0m45.030s 00:32:37.323 user 0m54.208s 00:32:37.323 sys 0m10.626s 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:37.323 ************************************ 00:32:37.323 END TEST nvmf_lvs_grow 00:32:37.323 ************************************ 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:37.323 ************************************ 00:32:37.323 START TEST nvmf_bdev_io_wait 00:32:37.323 ************************************ 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:37.323 * Looking for test storage... 
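The suite then moves straight into the next test: run_test wraps bdev_io_wait.sh the same way it wrapped lvs_grow above, printing the START/END banners and the time(1) summary seen at the end of each test. A rough reconstruction (banner formatting approximated; $rootdir stands for the spdk checkout; the real helper in autotest_common.sh also manages xtrace state):

  run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                   # produces the real/user/sys lines after each test
    echo "************ END TEST $name ************"
  }
  run_test nvmf_bdev_io_wait "$rootdir/test/nvmf/target/bdev_io_wait.sh" --transport=tcp --interrupt-mode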
00:32:37.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:37.323 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.324 --rc genhtml_branch_coverage=1 00:32:37.324 --rc genhtml_function_coverage=1 00:32:37.324 --rc genhtml_legend=1 00:32:37.324 --rc geninfo_all_blocks=1 00:32:37.324 --rc geninfo_unexecuted_blocks=1 00:32:37.324 00:32:37.324 ' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.324 --rc genhtml_branch_coverage=1 00:32:37.324 --rc genhtml_function_coverage=1 00:32:37.324 --rc genhtml_legend=1 00:32:37.324 --rc geninfo_all_blocks=1 00:32:37.324 --rc geninfo_unexecuted_blocks=1 00:32:37.324 00:32:37.324 ' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.324 --rc genhtml_branch_coverage=1 00:32:37.324 --rc genhtml_function_coverage=1 00:32:37.324 --rc genhtml_legend=1 00:32:37.324 --rc geninfo_all_blocks=1 00:32:37.324 --rc geninfo_unexecuted_blocks=1 00:32:37.324 00:32:37.324 ' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.324 --rc genhtml_branch_coverage=1 00:32:37.324 --rc genhtml_function_coverage=1 00:32:37.324 --rc genhtml_legend=1 00:32:37.324 --rc geninfo_all_blocks=1 00:32:37.324 --rc 
geninfo_unexecuted_blocks=1 00:32:37.324 00:32:37.324 ' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.324 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:37.325 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.480 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
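[Editor's sketch] The e810/x722/mlx array appends above are gather_supported_nvmf_pci_devs bucketing NICs by "vendor:device" ID and then picking the bucket named by SPDK_TEST_NVMF_NICS (e810 in this job). A rough stand-alone sketch, with pci_bus_cache populated by hand for illustration (the real script fills it by scanning the PCI bus):

#!/usr/bin/env bash
# Map "vendor:device" -> space-separated PCI addresses; hand-filled here.
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"   # the two E810 ports in this run
)
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C; key unset here, appends nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV; matches both ports above
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs in the trace
pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 in this job
printf 'found %d e810 port(s): %s\n' "${#pci_devs[@]}" "${pci_devs[*]}"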
00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:45.481 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.481 13:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:45.481 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:45.481 Found net devices under 0000:31:00.0: cvl_0_0 00:32:45.481 
13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:45.481 Found net devices under 0000:31:00.1: cvl_0_1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:32:45.481 00:32:45.481 --- 10.0.0.2 ping statistics --- 00:32:45.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.481 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:45.481 00:32:45.481 --- 10.0.0.1 ping statistics --- 00:32:45.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.481 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.481 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1969647 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1969647 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1969647 ']' 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
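[Editor's sketch] The nvmf_tcp_init steps traced above isolate target and initiator stacks on one host: the target-side E810 port is moved into its own network namespace, each side gets a 10.0.0.x address, port 4420 is opened, and both directions are pinged. Condensed recap of the exact commands from this run (needs root; shown for illustration, not meant to be re-run as-is):

# target-side port goes into a dedicated namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator IP stays on the host; target IP lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator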
00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.482 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 [2024-11-06 13:29:26.423387] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.482 [2024-11-06 13:29:26.424383] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:45.482 [2024-11-06 13:29:26.424418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.482 [2024-11-06 13:29:26.519527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.482 [2024-11-06 13:29:26.557287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.482 [2024-11-06 13:29:26.557317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.482 [2024-11-06 13:29:26.557326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.482 [2024-11-06 13:29:26.557333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.482 [2024-11-06 13:29:26.557338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.482 [2024-11-06 13:29:26.559024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.482 [2024-11-06 13:29:26.559177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.482 [2024-11-06 13:29:26.559492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.482 [2024-11-06 13:29:26.559493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.482 [2024-11-06 13:29:26.559856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
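[Editor's sketch] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is waitforlisten polling until the target's RPC socket answers. A simplified sketch of that loop, assuming SPDK's scripts/rpc.py and its rpc_get_methods call; the real helper in autotest_common.sh does more bookkeeping than this:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # app died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                  # RPC socket is up
        fi
        sleep 0.5
    done
    return 1                                          # gave up waiting
}
# usage: waitforlisten_sketch "$nvmfpid" /var/tmp/spdk.sock || exit 1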
00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 [2024-11-06 13:29:27.310487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:45.482 [2024-11-06 13:29:27.310714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.482 [2024-11-06 13:29:27.311018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.482 [2024-11-06 13:29:27.311151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
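[Editor's sketch] Because the target was started with --wait-for-rpc, subsystem init is deferred, which is exactly the window this test needs: bdev options must land over RPC before framework_start_init, and -p 5 -c 1 shrinks the bdev_io pool/cache, presumably to exhaust it and exercise the bdev_io_wait path. The equivalent rpc.py calls, matching the rpc_cmd lines above:

sock=/var/tmp/spdk.sock
scripts/rpc.py -s "$sock" bdev_set_options -p 5 -c 1   # tiny bdev_io pool + cache
scripts/rpc.py -s "$sock" framework_start_init          # now finish deferred startup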
00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 [2024-11-06 13:29:27.320349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 Malloc0 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.482 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:45.744 [2024-11-06 13:29:27.392590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1969878 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1969880 00:32:45.744 13:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1969882 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.744 { 00:32:45.744 "params": { 00:32:45.744 "name": "Nvme$subsystem", 00:32:45.744 "trtype": "$TEST_TRANSPORT", 00:32:45.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.744 "adrfam": "ipv4", 00:32:45.744 "trsvcid": "$NVMF_PORT", 00:32:45.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.744 "hdgst": ${hdgst:-false}, 00:32:45.744 "ddgst": ${ddgst:-false} 00:32:45.744 }, 00:32:45.744 "method": "bdev_nvme_attach_controller" 00:32:45.744 } 00:32:45.744 EOF 00:32:45.744 )") 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1969884 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.744 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.744 { 00:32:45.744 "params": { 00:32:45.744 "name": "Nvme$subsystem", 00:32:45.744 "trtype": "$TEST_TRANSPORT", 00:32:45.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.744 "adrfam": "ipv4", 00:32:45.744 "trsvcid": "$NVMF_PORT", 00:32:45.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.744 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
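[Editor's sketch] The "config+=("$(cat <<-EOF ... EOF)")" blocks above are gen_nvmf_target_json building one bdev_nvme_attach_controller fragment per subsystem, joining them with IFS=, and validating with jq; bdevperf then reads the result through /dev/fd/63. A runnable approximation (the outer wrapper object is simplified here relative to the real helper):

gen_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one fragment per subsystem, same fields as the heredocs traced above
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join fragments with commas; jq validates and pretty-prints the merged doc
    local IFS=,
    jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}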
00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.745 { 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme$subsystem", 00:32:45.745 "trtype": "$TEST_TRANSPORT", 00:32:45.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "$NVMF_PORT", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.745 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:45.745 { 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme$subsystem", 00:32:45.745 "trtype": "$TEST_TRANSPORT", 00:32:45.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "$NVMF_PORT", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:45.745 "hdgst": ${hdgst:-false}, 00:32:45.745 "ddgst": ${ddgst:-false} 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 } 00:32:45.745 EOF 00:32:45.745 )") 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1969878 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:45.745 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:45.745 "params": { 00:32:45.745 "name": "Nvme1", 00:32:45.745 "trtype": "tcp", 00:32:45.745 "traddr": "10.0.0.2", 00:32:45.745 "adrfam": "ipv4", 00:32:45.745 "trsvcid": "4420", 00:32:45.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:45.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:45.745 "hdgst": false, 00:32:45.745 "ddgst": false 00:32:45.745 }, 00:32:45.745 "method": "bdev_nvme_attach_controller" 00:32:45.745 }' 00:32:45.745 [2024-11-06 13:29:27.447851] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 13:29:27.447903] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:45.745 [2024-11-06 13:29:27.449607] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:32:45.745 [2024-11-06 13:29:27.449655] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:45.745 [2024-11-06 13:29:27.449960] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 13:29:27.450006] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:45.745 [2024-11-06 13:29:27.450775] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:45.745 [2024-11-06 13:29:27.450820] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:45.745 [2024-11-06 13:29:27.605952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.745 [2024-11-06 13:29:27.643216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:46.007 [2024-11-06 13:29:27.655655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-11-06 13:29:27.691524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:46.007 [2024-11-06 13:29:27.717070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-11-06 13:29:27.750548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:46.007 [2024-11-06 13:29:27.775613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-11-06 13:29:27.813500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:46.007 Running I/O for 1 seconds... 00:32:46.269 Running I/O for 1 seconds... 00:32:46.269 Running I/O for 1 seconds... 00:32:46.269 Running I/O for 1 seconds... 
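[Editor's sketch] The four "Running I/O for 1 seconds..." lines come from four concurrent bdevperf instances, one per workload, each on its own core mask and shm id, each fed its JSON config via process substitution (hence the /dev/fd/63 in the trace). A condensed sketch, reusing the gen_json_sketch helper from earlier; the bdevperf path matches this workspace layout:

BDEVPERF=./spdk/build/examples/bdevperf
declare -A masks=([write]="0x10 1" [read]="0x20 2" [flush]="0x40 3" [unmap]="0x80 4")
pids=()
for w in write read flush unmap; do
    read -r mask shm <<<"${masks[$w]}"
    # <(...) hands bdevperf a /dev/fd/NN path holding the generated JSON
    "$BDEVPERF" -m "$mask" -i "$shm" --json <(gen_json_sketch) \
        -q 128 -o 4096 -w "$w" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"   # mirrors the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID waits above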
00:32:47.211 7624.00 IOPS, 29.78 MiB/s 00:32:47.211 Latency(us) 00:32:47.211 [2024-11-06T12:29:29.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.211 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:47.211 Nvme1n1 : 1.02 7636.93 29.83 0.00 0.00 16613.31 5816.32 27743.57 00:32:47.211 [2024-11-06T12:29:29.113Z] =================================================================================================================== 00:32:47.211 [2024-11-06T12:29:29.113Z] Total : 7636.93 29.83 0.00 0.00 16613.31 5816.32 27743.57 00:32:47.211 11827.00 IOPS, 46.20 MiB/s 00:32:47.211 Latency(us) 00:32:47.211 [2024-11-06T12:29:29.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.211 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:47.211 Nvme1n1 : 1.01 11898.09 46.48 0.00 0.00 10721.32 2266.45 17148.59 00:32:47.211 [2024-11-06T12:29:29.113Z] =================================================================================================================== 00:32:47.211 [2024-11-06T12:29:29.113Z] Total : 11898.09 46.48 0.00 0.00 10721.32 2266.45 17148.59 00:32:47.211 8209.00 IOPS, 32.07 MiB/s 00:32:47.211 Latency(us) 00:32:47.211 [2024-11-06T12:29:29.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.211 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:47.212 Nvme1n1 : 1.01 8344.82 32.60 0.00 0.00 15301.44 3372.37 32331.09 00:32:47.212 [2024-11-06T12:29:29.114Z] =================================================================================================================== 00:32:47.212 [2024-11-06T12:29:29.114Z] Total : 8344.82 32.60 0.00 0.00 15301.44 3372.37 32331.09 00:32:47.212 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1969880 00:32:47.212 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1969882 00:32:47.212 185384.00 IOPS, 724.16 MiB/s 00:32:47.212 Latency(us) 00:32:47.212 [2024-11-06T12:29:29.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.212 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:47.212 Nvme1n1 : 1.00 185017.89 722.73 0.00 0.00 688.00 300.37 1966.08 00:32:47.212 [2024-11-06T12:29:29.114Z] =================================================================================================================== 00:32:47.212 [2024-11-06T12:29:29.114Z] Total : 185017.89 722.73 0.00 0.00 688.00 300.37 1966.08 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1969884 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:47.473 rmmod nvme_tcp 00:32:47.473 rmmod nvme_fabrics 00:32:47.473 rmmod nvme_keyring 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1969647 ']' 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1969647 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1969647 ']' 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1969647 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1969647 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1969647' 00:32:47.473 killing process with pid 1969647 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1969647 00:32:47.473 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1969647 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:47.734 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:47.735 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:47.735 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:47.735 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.735 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.735 13:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.647 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.647 00:32:49.647 real 0m12.761s 00:32:49.647 user 0m15.145s 00:32:49.647 sys 0m7.297s 00:32:49.647 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:49.647 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:49.647 ************************************ 00:32:49.647 END TEST nvmf_bdev_io_wait 00:32:49.647 ************************************ 00:32:49.648 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:49.648 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:49.648 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:49.648 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.909 ************************************ 00:32:49.909 START TEST nvmf_queue_depth 00:32:49.909 ************************************ 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:49.909 * Looking for test storage... 
00:32:49.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:49.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.909 --rc genhtml_branch_coverage=1 00:32:49.909 --rc genhtml_function_coverage=1 00:32:49.909 --rc genhtml_legend=1 00:32:49.909 --rc geninfo_all_blocks=1 00:32:49.909 --rc geninfo_unexecuted_blocks=1 00:32:49.909 00:32:49.909 ' 00:32:49.909 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:49.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.910 --rc genhtml_branch_coverage=1 00:32:49.910 --rc genhtml_function_coverage=1 00:32:49.910 --rc genhtml_legend=1 00:32:49.910 --rc geninfo_all_blocks=1 00:32:49.910 --rc geninfo_unexecuted_blocks=1 00:32:49.910 00:32:49.910 ' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:49.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.910 --rc genhtml_branch_coverage=1 00:32:49.910 --rc genhtml_function_coverage=1 00:32:49.910 --rc genhtml_legend=1 00:32:49.910 --rc geninfo_all_blocks=1 00:32:49.910 --rc geninfo_unexecuted_blocks=1 00:32:49.910 00:32:49.910 ' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:49.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.910 --rc genhtml_branch_coverage=1 00:32:49.910 --rc genhtml_function_coverage=1 00:32:49.910 --rc genhtml_legend=1 00:32:49.910 --rc geninfo_all_blocks=1 00:32:49.910 --rc 
geninfo_unexecuted_blocks=1 00:32:49.910 00:32:49.910 ' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.910 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.170 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.170 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.170 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.170 13:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.311 13:29:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:58.311 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:58.311 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:32:58.311 Found net devices under 0000:31:00.0: cvl_0_0 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:58.311 Found net devices under 0000:31:00.1: cvl_0_1 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:58.311 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:58.312 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:32:58.312 00:32:58.312 --- 10.0.0.2 ping statistics --- 00:32:58.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.312 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:58.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:32:58.312 00:32:58.312 --- 10.0.0.1 ping statistics --- 00:32:58.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.312 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1974401 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1974401 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1974401 ']' 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
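The two one-packet pings above close out nvmf_tcp_init: the first E810 port (cvl_0_0) now lives in a private namespace as the target side, and its sibling (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace, the topology setup is:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                        # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator

The comment on the iptables rule is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline (the iptr helper seen near the end of this test) remove exactly the rules this test added.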
00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.312 [2024-11-06 13:29:39.136361] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.312 [2024-11-06 13:29:39.137355] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:32:58.312 [2024-11-06 13:29:39.137393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.312 [2024-11-06 13:29:39.234334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.312 [2024-11-06 13:29:39.277453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.312 [2024-11-06 13:29:39.277502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.312 [2024-11-06 13:29:39.277511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.312 [2024-11-06 13:29:39.277518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.312 [2024-11-06 13:29:39.277524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.312 [2024-11-06 13:29:39.278289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.312 [2024-11-06 13:29:39.352483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:58.312 [2024-11-06 13:29:39.352788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
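Those startup notices come from the target that nvmfappstart launched inside the namespace a few records earlier. Stripped of the harness plumbing, the launch plus readiness wait reduce to the following sketch (the until-loop is one illustrative way to poll; waitforlisten's real logic is more involved):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll the app's RPC socket until it answers.
  until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done

-m 0x2 pins the single reactor to core 1 (hence the 'Reactor started on core 1' notice above), and --interrupt-mode is why each spdk_thread reports being set to intr mode rather than running the usual polling loop.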
00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.312 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.312 [2024-11-06 13:29:39.995135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.312 Malloc0 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.312 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
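Spelled out as direct rpc.py calls instead of the rpc_cmd wrapper, the provisioning sequence traced above is (the listener's 'Listening on 10.0.0.2 port 4420' notice lands at the top of the next records):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -o and -u 8192 exactly as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 from the top of queue_depth.sh are what feed the bdev_malloc_create arguments.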
00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.313 [2024-11-06 13:29:40.079311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1974624 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1974624 /var/tmp/bdevperf.sock 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1974624 ']' 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:58.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:58.313 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:58.313 [2024-11-06 13:29:40.137807] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:32:58.313 [2024-11-06 13:29:40.137879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974624 ] 00:32:58.573 [2024-11-06 13:29:40.234165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.573 [2024-11-06 13:29:40.286605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.144 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:59.144 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:59.144 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:59.144 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.144 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:59.404 NVMe0n1 00:32:59.404 13:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.404 13:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:59.404 Running I/O for 10 seconds... 00:33:01.358 8315.00 IOPS, 32.48 MiB/s [2024-11-06T12:29:44.643Z] 8711.00 IOPS, 34.03 MiB/s [2024-11-06T12:29:45.583Z] 9562.00 IOPS, 37.35 MiB/s [2024-11-06T12:29:46.524Z] 10496.00 IOPS, 41.00 MiB/s [2024-11-06T12:29:47.464Z] 11073.60 IOPS, 43.26 MiB/s [2024-11-06T12:29:48.405Z] 11494.33 IOPS, 44.90 MiB/s [2024-11-06T12:29:49.346Z] 11824.29 IOPS, 46.19 MiB/s [2024-11-06T12:29:50.286Z] 12037.38 IOPS, 47.02 MiB/s [2024-11-06T12:29:51.669Z] 12202.11 IOPS, 47.66 MiB/s [2024-11-06T12:29:51.669Z] 12381.80 IOPS, 48.37 MiB/s 00:33:09.767 Latency(us) 00:33:09.767 [2024-11-06T12:29:51.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.768 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:09.768 Verification LBA range: start 0x0 length 0x4000 00:33:09.768 NVMe0n1 : 10.06 12405.53 48.46 0.00 0.00 82246.71 24685.23 75147.95 00:33:09.768 [2024-11-06T12:29:51.670Z] =================================================================================================================== 00:33:09.768 [2024-11-06T12:29:51.670Z] Total : 12405.53 48.46 0.00 0.00 82246.71 24685.23 75147.95 00:33:09.768 { 00:33:09.768 "results": [ 00:33:09.768 { 00:33:09.768 "job": "NVMe0n1", 00:33:09.768 "core_mask": "0x1", 00:33:09.768 "workload": "verify", 00:33:09.768 "status": "finished", 00:33:09.768 "verify_range": { 00:33:09.768 "start": 0, 00:33:09.768 "length": 16384 00:33:09.768 }, 00:33:09.768 "queue_depth": 1024, 00:33:09.768 "io_size": 4096, 00:33:09.768 "runtime": 10.058172, 00:33:09.768 "iops": 12405.534524563707, 00:33:09.768 "mibps": 48.45911923657698, 00:33:09.768 "io_failed": 0, 00:33:09.768 "io_timeout": 0, 00:33:09.768 "avg_latency_us": 82246.7070206849, 00:33:09.768 "min_latency_us": 24685.226666666666, 00:33:09.768 "max_latency_us": 75147.94666666667 00:33:09.768 } 
00:33:09.768 ], 00:33:09.768 "core_count": 1 00:33:09.768 } 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1974624 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1974624 ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1974624 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974624 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974624' 00:33:09.768 killing process with pid 1974624 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1974624 00:33:09.768 Received shutdown signal, test time was about 10.000000 seconds 00:33:09.768 00:33:09.768 Latency(us) 00:33:09.768 [2024-11-06T12:29:51.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.768 [2024-11-06T12:29:51.670Z] =================================================================================================================== 00:33:09.768 [2024-11-06T12:29:51.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1974624 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:09.768 rmmod nvme_tcp 00:33:09.768 rmmod nvme_fabrics 00:33:09.768 rmmod nvme_keyring 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
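Before the teardown above, the measurement itself ran in three steps that are easy to miss in the trace: bdevperf starts idle (-z) against its own RPC socket, the NVMe-oF controller is attached over TCP, and perform_tests launches the 10-second verify run at queue depth 1024:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # (the harness waits for /var/tmp/bdevperf.sock to answer before continuing)
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ramp under the 1024-deep queue is visible in the per-second ticks above (8315 climbing to 12381 IOPS) before the final summary of 12405.53 IOPS at 82.2 ms average latency; the zeroed Fail/s and TO/s columns are what show the target sustained that depth without I/O failures or timeouts.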
00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1974401 ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1974401 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1974401 ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1974401 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974401 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974401' 00:33:09.768 killing process with pid 1974401 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1974401 00:33:09.768 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1974401 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.029 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:12.574 00:33:12.574 real 0m22.274s 00:33:12.574 user 0m24.711s 00:33:12.574 sys 0m7.204s 00:33:12.574 13:29:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:12.574 ************************************ 00:33:12.574 END TEST nvmf_queue_depth 00:33:12.574 ************************************ 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:12.574 ************************************ 00:33:12.574 START TEST nvmf_target_multipath 00:33:12.574 ************************************ 00:33:12.574 13:29:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:12.574 * Looking for test storage... 00:33:12.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:12.574 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:12.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.575 --rc genhtml_branch_coverage=1 00:33:12.575 --rc genhtml_function_coverage=1 00:33:12.575 --rc genhtml_legend=1 00:33:12.575 --rc geninfo_all_blocks=1 00:33:12.575 --rc geninfo_unexecuted_blocks=1 00:33:12.575 00:33:12.575 ' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:12.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.575 --rc genhtml_branch_coverage=1 00:33:12.575 --rc genhtml_function_coverage=1 00:33:12.575 --rc genhtml_legend=1 00:33:12.575 --rc geninfo_all_blocks=1 00:33:12.575 --rc geninfo_unexecuted_blocks=1 00:33:12.575 00:33:12.575 ' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:12.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.575 --rc genhtml_branch_coverage=1 00:33:12.575 --rc genhtml_function_coverage=1 00:33:12.575 --rc genhtml_legend=1 
00:33:12.575 --rc geninfo_all_blocks=1 00:33:12.575 --rc geninfo_unexecuted_blocks=1 00:33:12.575 00:33:12.575 ' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:12.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:12.575 --rc genhtml_branch_coverage=1 00:33:12.575 --rc genhtml_function_coverage=1 00:33:12.575 --rc genhtml_legend=1 00:33:12.575 --rc geninfo_all_blocks=1 00:33:12.575 --rc geninfo_unexecuted_blocks=1 00:33:12.575 00:33:12.575 ' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:12.575 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:12.576 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.716 13:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:20.716 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:20.716 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.716 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.717 13:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:20.717 Found net devices under 0000:31:00.0: cvl_0_0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:20.717 Found net devices under 0000:31:00.1: cvl_0_1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:20.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:33:20.717 00:33:20.717 --- 10.0.0.2 ping statistics --- 00:33:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.717 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:33:20.717 00:33:20.717 --- 10.0.0.1 ping statistics --- 00:33:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.717 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:20.717 only one NIC for nvmf test 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.717 rmmod nvme_tcp 00:33:20.717 rmmod nvme_fabrics 00:33:20.717 rmmod nvme_keyring 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:20.717 13:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.717 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:22.102 13:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.102 00:33:22.102 real 0m9.751s 00:33:22.102 user 0m2.153s 00:33:22.102 sys 0m5.525s 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:22.102 ************************************ 00:33:22.102 END TEST nvmf_target_multipath 00:33:22.102 ************************************ 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.102 ************************************ 00:33:22.102 START TEST nvmf_zcopy 00:33:22.102 ************************************ 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:22.102 * Looking for test storage... 
00:33:22.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.102 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.103 --rc genhtml_branch_coverage=1 00:33:22.103 --rc genhtml_function_coverage=1 00:33:22.103 --rc genhtml_legend=1 00:33:22.103 --rc geninfo_all_blocks=1 00:33:22.103 --rc geninfo_unexecuted_blocks=1 00:33:22.103 00:33:22.103 ' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.103 --rc genhtml_branch_coverage=1 00:33:22.103 --rc genhtml_function_coverage=1 00:33:22.103 --rc genhtml_legend=1 00:33:22.103 --rc geninfo_all_blocks=1 00:33:22.103 --rc geninfo_unexecuted_blocks=1 00:33:22.103 00:33:22.103 ' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.103 --rc genhtml_branch_coverage=1 00:33:22.103 --rc genhtml_function_coverage=1 00:33:22.103 --rc genhtml_legend=1 00:33:22.103 --rc geninfo_all_blocks=1 00:33:22.103 --rc geninfo_unexecuted_blocks=1 00:33:22.103 00:33:22.103 ' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.103 --rc genhtml_branch_coverage=1 00:33:22.103 --rc genhtml_function_coverage=1 00:33:22.103 --rc genhtml_legend=1 00:33:22.103 --rc geninfo_all_blocks=1 00:33:22.103 --rc geninfo_unexecuted_blocks=1 00:33:22.103 00:33:22.103 ' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.103 13:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.103 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.104 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.104 13:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.245 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.246 13:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:30.246 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:30.246 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:30.246 Found net devices under 0000:31:00.0: cvl_0_0 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:30.246 Found net devices under 0000:31:00.1: cvl_0_1 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.246 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.246 13:30:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:33:30.246 00:33:30.246 --- 10.0.0.2 ping statistics --- 00:33:30.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.246 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:33:30.246 00:33:30.246 --- 10.0.0.1 ping statistics --- 00:33:30.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.246 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.246 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1985537 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1985537 00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1985537 ']'
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:30.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:30.247 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.247 [2024-11-06 13:30:11.397727] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:30.247 [2024-11-06 13:30:11.398737] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
00:33:30.247 [2024-11-06 13:30:11.398788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:30.247 [2024-11-06 13:30:11.490943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:30.247 [2024-11-06 13:30:11.526436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:30.247 [2024-11-06 13:30:11.526469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:30.247 [2024-11-06 13:30:11.526477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:30.247 [2024-11-06 13:30:11.526484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:30.247 [2024-11-06 13:30:11.526490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:30.247 [2024-11-06 13:30:11.527060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:30.247 [2024-11-06 13:30:11.581767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:30.247 [2024-11-06 13:30:11.582011] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
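Distilled from the nvmf_tcp_init and nvmfappstart trace above, the test-bed plumbing is a short shell sequence (a condensed sketch using exactly the device names, addresses, and paths from this run; all of it runs as root):

  # Target-side port moves into a private network namespace; the initiator
  # port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
  # nvmfappstart then launches the target inside the namespace: one reactor
  # (-m 0x2, i.e. core 1), all tracepoint groups (-e 0xFFFF), interrupt mode:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

waitforlisten then polls until the application answers on /var/tmp/spdk.sock, which is why the "Waiting for process to start up..." line precedes the DPDK initialization notices, and the -m 0x2 core mask matches the "Reactor started on core 1" notice.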
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 [2024-11-06 13:30:12.251853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 [2024-11-06 13:30:12.280120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.508 malloc0
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.508 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:30.509 {
00:33:30.509 "params": {
00:33:30.509 "name": "Nvme$subsystem",
00:33:30.509 "trtype": "$TEST_TRANSPORT",
00:33:30.509 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:30.509 "adrfam": "ipv4",
00:33:30.509 "trsvcid": "$NVMF_PORT",
00:33:30.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:30.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:30.509 "hdgst": ${hdgst:-false},
00:33:30.509 "ddgst": ${ddgst:-false}
00:33:30.509 },
00:33:30.509 "method": "bdev_nvme_attach_controller"
00:33:30.509 }
00:33:30.509 EOF
00:33:30.509 )")
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:30.509 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:30.509 "params": {
00:33:30.509 "name": "Nvme1",
00:33:30.509 "trtype": "tcp",
00:33:30.509 "traddr": "10.0.0.2",
00:33:30.509 "adrfam": "ipv4",
00:33:30.509 "trsvcid": "4420",
00:33:30.509 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:30.509 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:30.509 "hdgst": false,
00:33:30.509 "ddgst": false
00:33:30.509 },
00:33:30.509 "method": "bdev_nvme_attach_controller"
00:33:30.509 }'
00:33:30.509 [2024-11-06 13:30:12.381397] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
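The rpc_cmd calls traced above are the harness wrapper around SPDK's scripts/rpc.py, talking to /var/tmp/spdk.sock inside the namespace. Issued by hand, the target build-out would look roughly like the sketch below; the rpc shell function is illustrative, the arguments are exactly the logged ones:

  rpc() { ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, in-capsule data size 0, zero-copy on
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MB RAM-backed bdev, 4096-byte blocks
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf (first run above: --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192) then attaches to that subsystem from the initiator side, using the bdev_nvme_attach_controller config that gen_nvmf_target_json prints in the trace.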
00:33:30.509 [2024-11-06 13:30:12.381465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985961 ]
00:33:30.770 [2024-11-06 13:30:12.457306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:30.770 [2024-11-06 13:30:12.509828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:31.031 Running I/O for 10 seconds...
00:33:32.913 6571.00 IOPS, 51.34 MiB/s [2024-11-06T12:30:16.201Z]
6605.50 IOPS, 51.61 MiB/s [2024-11-06T12:30:17.143Z]
6597.33 IOPS, 51.54 MiB/s [2024-11-06T12:30:18.084Z]
6615.50 IOPS, 51.68 MiB/s [2024-11-06T12:30:19.026Z]
6894.60 IOPS, 53.86 MiB/s [2024-11-06T12:30:19.965Z]
7356.83 IOPS, 57.48 MiB/s [2024-11-06T12:30:20.907Z]
7680.00 IOPS, 60.00 MiB/s [2024-11-06T12:30:21.848Z]
7923.38 IOPS, 61.90 MiB/s [2024-11-06T12:30:23.233Z]
8116.78 IOPS, 63.41 MiB/s [2024-11-06T12:30:23.233Z]
8270.60 IOPS, 64.61 MiB/s
00:33:41.331 Latency(us)
00:33:41.331 [2024-11-06T12:30:23.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:41.331 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:41.331 Verification LBA range: start 0x0 length 0x1000
00:33:41.331 Nvme1n1 : 10.01 8275.44 64.65 0.00 0.00 15422.55 2225.49 27634.35
00:33:41.331 [2024-11-06T12:30:23.233Z] ===================================================================================================================
00:33:41.331 [2024-11-06T12:30:23.233Z] Total : 8275.44 64.65 0.00 0.00 15422.55 2225.49 27634.35
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1987966
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:41.331 {
00:33:41.331 "params": {
00:33:41.331 "name": "Nvme$subsystem",
00:33:41.331 "trtype": "$TEST_TRANSPORT",
00:33:41.331 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:41.331 "adrfam": "ipv4",
00:33:41.331 "trsvcid": "$NVMF_PORT",
00:33:41.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:41.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:41.331 "hdgst": ${hdgst:-false},
00:33:41.331 "ddgst": ${ddgst:-false}
00:33:41.331 },
00:33:41.331 "method": "bdev_nvme_attach_controller"
00:33:41.331 }
00:33:41.331 EOF
00:33:41.331 )")
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
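Two arithmetic sanity checks on the verify-run table above; nothing here comes from outside the log, the awk program is just a calculator:

  awk 'BEGIN {
      iops = 8275.44; io_size = 8192; qdepth = 128   # values from the table and the command line
      # bandwidth column: IOPS x 8 KiB per I/O
      printf "bandwidth = %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 64.65, matching MiB/s
      # expected average latency with a constantly full queue: qdepth / IOPS
      printf "latency   = %.2f us\n", qdepth / iops * 1e6                 # prints ~15467, vs 15422.55 logged
  }'

The second line is Little's law: with 128 requests always in flight, average latency must be about qdepth/IOPS, and the logged 15422.55 us average agrees to within a fraction of a percent.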
00:33:41.331 [2024-11-06 13:30:22.935377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:41.331 [2024-11-06 13:30:22.935408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:41.331 13:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:41.331 "params": {
00:33:41.331 "name": "Nvme1",
00:33:41.331 "trtype": "tcp",
00:33:41.331 "traddr": "10.0.0.2",
00:33:41.331 "adrfam": "ipv4",
00:33:41.331 "trsvcid": "4420",
00:33:41.331 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:41.331 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:41.331 "hdgst": false,
00:33:41.331 "ddgst": false
00:33:41.331 },
00:33:41.331 "method": "bdev_nvme_attach_controller"
00:33:41.331 }'
00:33:41.331 [2024-11-06 13:30:22.947348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:41.331 [2024-11-06 13:30:22.947358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:41.331 [2024-11-06 13:30:22.959345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:41.331 [2024-11-06 13:30:22.959352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:41.331 [2024-11-06 13:30:22.971344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:41.331 [2024-11-06 13:30:22.971353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:41.331 [2024-11-06 13:30:22.976853] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization...
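The second bdevperf pass (randrw 50/50, 8 KiB I/O, queue depth 128, 5 seconds) receives the same generated config through an anonymous file descriptor, and the wall of "Requested NSID 1 already in use" records around it is deliberate: while this I/O runs, the script keeps issuing nvmf_subsystem_add_ns against the already-populated subsystem, so each attempt pauses the subsystem (the nvmf_rpc_ns_paused frames), fails, and resumes it. zcopy.sh itself is not reproduced in the log, so the loop below is a presumed sketch of that shape, not a quotation:

  # Config arrives as /dev/fd/63 via process substitution (matches the logged command line):
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!                      # 1987966 in this run
  # Presumed RPC-hammering loop; each iteration produces one
  # "Requested NSID 1 already in use" / "Unable to add namespace" pair in the log:
  while kill -0 "$perfpid" 2>/dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done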
00:33:41.331 [2024-11-06 13:30:22.976900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987966 ]
00:33:41.331 [... repeated pair omitted: "subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" followed by "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace", here from 13:30:22.983 through 13:30:23.055 ...]
00:33:41.331 [2024-11-06 13:30:23.059525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:41.331 [... same pair repeats at 13:30:23.067 and 13:30:23.079 ...]
00:33:41.331 [2024-11-06 13:30:23.088946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:41.331 [... same pair repeats from 13:30:23.091 through 13:30:23.115 ...]
00:33:41.331 [2024-11-06 13:30:23.127348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:33:41.331 [2024-11-06 13:30:23.127360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:41.331 [... same pair repeats every 12-15 ms, occurrences from 13:30:23.139 through 13:30:23.259 omitted ...]
00:33:41.592 Running I/O for 5 seconds...
00:33:41.592 [... same pair repeats from 13:30:23.275 through 13:30:24.258 omitted ...]
00:33:42.377 19091.00 IOPS, 149.15 MiB/s [2024-11-06T12:30:24.279Z]
00:33:42.377 [... same pair repeats from 13:30:24.271 through 13:30:25.236 omitted ...]
00:33:43.421 [2024-11-06 13:30:25.250489]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[... same pair repeats through 13:30:25.263 omitted ...]
00:33:43.421 19088.50 IOPS, 149.13 MiB/s [2024-11-06T12:30:25.323Z]
[... same pair repeats from 13:30:25.276 through 13:30:25.628 omitted ...]
00:33:43.943 [2024-11-06 13:30:25.642804]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.943 [2024-11-06 13:30:25.642818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.943 [2024-11-06 13:30:25.656340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.943 [2024-11-06 13:30:25.656354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.943 [2024-11-06 13:30:25.670845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.943 [2024-11-06 13:30:25.670860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.943 [2024-11-06 13:30:25.683681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.943 [2024-11-06 13:30:25.683695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.698360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.698374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.711251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.711265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.724253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.724267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.738678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.738693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.751638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.751652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.766637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.766652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.780064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.780079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.794540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.794554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.807328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.807342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.820324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.820338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:43.944 [2024-11-06 13:30:25.834493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:43.944 [2024-11-06 13:30:25.834507] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.847454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.847469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.860295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.860310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.874245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.874260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.887358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.887373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.900287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.900302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.914480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.914495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.927660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.927674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.942643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.942657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.955790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.955804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.970325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.970339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.983438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.983452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:25.996437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:25.996451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.010763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.010778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.023702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.023716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.038468] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.038483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.051605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.051620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.064340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.064355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.078735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.078755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.205 [2024-11-06 13:30:26.091675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.205 [2024-11-06 13:30:26.091690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.106462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.106478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.119976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.119990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.134428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.134443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.147489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.147509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.160335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.160349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.174353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.174368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.187408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.187423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.200250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.200264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.214440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.214455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.227306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.227320] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.240433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.240447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.254788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.254803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.267805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.267818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 19080.33 IOPS, 149.07 MiB/s [2024-11-06T12:30:26.369Z] [2024-11-06 13:30:26.282825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.282840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.296188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.296202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.310124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.310139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.322997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.323011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.335822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.335836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.350182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.350196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.467 [2024-11-06 13:30:26.363325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.467 [2024-11-06 13:30:26.363339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.376385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.376400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.390443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.390458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.403312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.403331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.416393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.416408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 
13:30:26.430523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.430538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.443589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.443604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.456469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.456483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.470481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.470496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.483535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.483549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.496500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.496514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.510496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.510511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.523586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.523600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.536007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.536020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.550341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.550356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.563359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.563373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.576030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.576044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.590714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.590728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.603776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.603790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.728 [2024-11-06 13:30:26.618380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.728 [2024-11-06 13:30:26.618394] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.989 [2024-11-06 13:30:26.631217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.631231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.644340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.644355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.658638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.658656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.671687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.671700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.686194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.686208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.699253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.699268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.711929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.711943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.726280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.726295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.739472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.739487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.752212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.752226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.766199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.766212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.779535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.779550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.792085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.792099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.806440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.806454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.819699] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.819713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.834306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.834320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.847291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.847307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.860120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.860134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.874861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.874875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.990 [2024-11-06 13:30:26.887871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:44.990 [2024-11-06 13:30:26.887885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.902294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.902309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.915855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.915869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.930138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.930152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.942927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.942941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.956466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.956480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.970456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.970470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.983477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.983492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:26.995848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:26.995861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.010514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.010529] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.023658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.023671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.038385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.038399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.051023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.051038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.063715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.063729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.078184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.251 [2024-11-06 13:30:27.078198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.251 [2024-11-06 13:30:27.091457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.252 [2024-11-06 13:30:27.091472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.252 [2024-11-06 13:30:27.104695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.252 [2024-11-06 13:30:27.104709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.252 [2024-11-06 13:30:27.118829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.252 [2024-11-06 13:30:27.118843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.252 [2024-11-06 13:30:27.131793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.252 [2024-11-06 13:30:27.131814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.252 [2024-11-06 13:30:27.146227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.252 [2024-11-06 13:30:27.146241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.159357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.159372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.172390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.172404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.186427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.186441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.199565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.199579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.212577] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.212591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.226299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.226313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.239676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.239690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.254325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.254340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.267289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.267304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 19085.00 IOPS, 149.10 MiB/s [2024-11-06T12:30:27.415Z] [2024-11-06 13:30:27.280001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.280015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.294424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.294438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.307667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.307681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.322470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.322485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.335591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.335605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.348643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.348658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.362554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.362569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.375644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.375659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.390677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.513 [2024-11-06 13:30:27.390693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.513 [2024-11-06 13:30:27.403811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:45.513 [2024-11-06 13:30:27.403826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.418373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.418388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.431129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.431143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.444464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.444478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.458527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.458542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.471696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.471709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.486560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.486575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.499705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.499719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.514299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.514313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.527371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.527386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.540057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.540071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.554348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.554363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.567303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.567317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.580413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.580428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.594528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.594542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.607461] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.775 [2024-11-06 13:30:27.607476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.775 [2024-11-06 13:30:27.620393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.776 [2024-11-06 13:30:27.620408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.776 [2024-11-06 13:30:27.634446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.776 [2024-11-06 13:30:27.634461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.776 [2024-11-06 13:30:27.647439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.776 [2024-11-06 13:30:27.647453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:45.776 [2024-11-06 13:30:27.660172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:45.776 [2024-11-06 13:30:27.660185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.675021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.675044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.688166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.688180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.702615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.702630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.715543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.715557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.728399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.728414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.742469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.742484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.755514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.755529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.768377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.768391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.782410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.782425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.795075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.795090] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.807857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.807871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.822651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.822665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.835839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.835853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.850655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.850670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.863487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.863502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.876168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.876182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.890783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.890798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.904036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.904051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.918349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.918364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.038 [2024-11-06 13:30:27.931238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.038 [2024-11-06 13:30:27.931257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:27.943869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:27.943883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:27.958154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:27.958169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:27.971389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:27.971405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:27.984295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:27.984310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:27.998577] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:27.998593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.011584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.011599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.024478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.024493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.038302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.038318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.051402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.051417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.064985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.064999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.078562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.078576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.300 [2024-11-06 13:30:28.091786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.300 [2024-11-06 13:30:28.091799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.106450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.106465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.119700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.119713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.134557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.134572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.147552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.147567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.160387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.160402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.174344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.174358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.301 [2024-11-06 13:30:28.187230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.301 [2024-11-06 13:30:28.187248] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.562 [2024-11-06 13:30:28.200174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.200188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.214618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.214632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.227312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.227326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.239914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.239927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.254582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.254596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.267508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.267522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 19081.00 IOPS, 149.07 MiB/s [2024-11-06T12:30:28.465Z] [2024-11-06 13:30:28.279415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.279428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 00:33:46.563 Latency(us) 00:33:46.563 [2024-11-06T12:30:28.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.563 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:46.563 Nvme1n1 : 5.01 19080.59 149.07 0.00 0.00 6702.17 2648.75 11195.73 00:33:46.563 [2024-11-06T12:30:28.465Z] =================================================================================================================== 00:33:46.563 [2024-11-06T12:30:28.465Z] Total : 19080.59 149.07 0.00 0.00 6702.17 2648.75 11195.73 00:33:46.563 [2024-11-06 13:30:28.291372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.291386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.303357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.303369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.315352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.315365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.327351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 13:30:28.327362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:46.563 [2024-11-06 13:30:28.339347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:46.563 [2024-11-06 
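The error pairs summarized above come from repeated nvmf_subsystem_add_ns RPCs that request an NSID the subsystem has already claimed. A minimal reproduction sketch against a running SPDK target, assuming the default RPC socket, the nqn.2016-06.io.spdk:cnode1 subsystem from this run, and a hypothetical malloc0 backing bdev (sizes illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create a 64 MiB malloc bdev with 512-byte blocks to back the namespace.
    $rpc bdev_malloc_create -b malloc0 64 512
    # Claim NSID 1 for the subsystem; this first call succeeds.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Any further add with the same NSID is rejected by the target with
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1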
00:33:46.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1987966) - No such process
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1987966
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
delay0
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-06 13:30:28.506135] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:54.964 Initializing NVMe Controllers
00:33:54.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:54.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:54.964 Initialization complete. Launching workers.
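The trace above (zcopy.sh@53-56) wraps malloc0 in a delay bdev, re-exports it as NSID 1, and runs the bundled abort example against it; with every I/O artificially stalled, most requests are still queued when the aborts are issued, which is the path under test. A condensed sketch of that flow, assuming the same workspace paths and that bdev_delay_create's -r/-t/-w/-n values are average/p99 read and write latencies in microseconds:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Insert an ~1 s artificial latency in front of malloc0.
    $spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Export the slow bdev as NSID 1 of the test subsystem.
    $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive queued I/O over TCP and abort it (5 s run, queue depth 64, 50/50 randrw).
    $spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In the statistics that follow, the abort totals are self-consistent: 5683 successes plus 190 unsuccessful aborts equal the 5873 aborts submitted.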
00:33:54.964 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5592
00:33:54.964 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5873, failed to submit 39
00:33:54.964 success 5683, unsuccessful 190, failed 0
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1985537 ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1985537 ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1985537'
killing process with pid 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1985537
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.964 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.906 00:33:55.906 real 0m33.983s 00:33:55.906 user 0m43.460s 00:33:55.906 sys 0m12.252s 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:55.906 ************************************ 00:33:55.906 END TEST nvmf_zcopy 00:33:55.906 ************************************ 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:55.906 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:56.168 ************************************ 00:33:56.168 START TEST nvmf_nmic 00:33:56.168 ************************************ 00:33:56.168 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:56.168 * Looking for test storage... 
00:33:56.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.168 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:56.168 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:56.168 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:56.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.168 --rc genhtml_branch_coverage=1 00:33:56.168 --rc genhtml_function_coverage=1 00:33:56.168 --rc genhtml_legend=1 00:33:56.168 --rc geninfo_all_blocks=1 00:33:56.168 --rc geninfo_unexecuted_blocks=1 00:33:56.168 00:33:56.168 ' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:56.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.168 --rc genhtml_branch_coverage=1 00:33:56.168 --rc genhtml_function_coverage=1 00:33:56.168 --rc genhtml_legend=1 00:33:56.168 --rc geninfo_all_blocks=1 00:33:56.168 --rc geninfo_unexecuted_blocks=1 00:33:56.168 00:33:56.168 ' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:56.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.168 --rc genhtml_branch_coverage=1 00:33:56.168 --rc genhtml_function_coverage=1 00:33:56.168 --rc genhtml_legend=1 00:33:56.168 --rc geninfo_all_blocks=1 00:33:56.168 --rc geninfo_unexecuted_blocks=1 00:33:56.168 00:33:56.168 ' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:56.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.168 --rc genhtml_branch_coverage=1 00:33:56.168 --rc genhtml_function_coverage=1 00:33:56.168 --rc genhtml_legend=1 00:33:56.168 --rc geninfo_all_blocks=1 00:33:56.168 --rc geninfo_unexecuted_blocks=1 00:33:56.168 00:33:56.168 ' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.168 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.169 13:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.169 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.430 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.431 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.431 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.431 13:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.578 13:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:04.578 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.578 13:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:04.578 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.578 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:04.579 Found net devices under 0000:31:00.0: cvl_0_0 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.579 
13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:04.579 Found net devices under 0000:31:00.1: cvl_0_1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:34:04.579 00:34:04.579 --- 10.0.0.2 ping statistics --- 00:34:04.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.579 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:04.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:34:04.579 00:34:04.579 --- 10.0.0.1 ping statistics --- 00:34:04.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.579 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1994491 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1994491 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1994491 ']' 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:04.579 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.579 [2024-11-06 13:30:45.793920] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:04.579 [2024-11-06 13:30:45.795086] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:34:04.579 [2024-11-06 13:30:45.795139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.579 [2024-11-06 13:30:45.896297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:04.579 [2024-11-06 13:30:45.951731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.579 [2024-11-06 13:30:45.951799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.579 [2024-11-06 13:30:45.951808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.579 [2024-11-06 13:30:45.951816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.579 [2024-11-06 13:30:45.951822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.579 [2024-11-06 13:30:45.954216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.579 [2024-11-06 13:30:45.954377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:04.579 [2024-11-06 13:30:45.954537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.579 [2024-11-06 13:30:45.954537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:04.579 [2024-11-06 13:30:46.031773] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:04.579 [2024-11-06 13:30:46.032251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:04.579 [2024-11-06 13:30:46.032903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:04.579 [2024-11-06 13:30:46.033364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:04.579 [2024-11-06 13:30:46.033427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.841 [2024-11-06 13:30:46.659418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.841 Malloc0 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.841 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
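(Condensed from the xtrace above: the nmic test starts an interrupt-mode target inside the cvl_0_0_ns_spdk namespace, then builds the subsystem over JSON-RPC. With the harness wrappers expanded — rpc_cmd is scripts/rpc.py — the bring-up is approximately:)

# start the target in the server-side namespace, four cores, interrupt mode
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
# once the target listens on /var/tmp/spdk.sock:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420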
00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 [2024-11-06 13:30:46.751773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:05.103 test case1: single bdev can't be used in multiple subsystems 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 [2024-11-06 13:30:46.787034] bdev.c:8189:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:05.103 [2024-11-06 13:30:46.787059] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:05.103 [2024-11-06 13:30:46.787068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.103 request: 00:34:05.103 { 00:34:05.103 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:05.103 "namespace": { 00:34:05.103 "bdev_name": "Malloc0", 00:34:05.103 "no_auto_visible": false 00:34:05.103 }, 00:34:05.103 "method": "nvmf_subsystem_add_ns", 00:34:05.103 "req_id": 1 00:34:05.103 } 00:34:05.103 Got JSON-RPC error response 00:34:05.103 response: 00:34:05.103 { 00:34:05.103 "code": -32602, 00:34:05.103 "message": "Invalid parameters" 00:34:05.103 } 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:05.103 13:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:05.103 Adding namespace failed - expected result. 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:05.103 test case2: host connect to nvmf target in multiple paths 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:05.103 [2024-11-06 13:30:46.799188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.103 13:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:05.365 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:05.979 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:05.979 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:05.979 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:05.979 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:05.979 13:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:08.092 13:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:08.092 [global] 00:34:08.092 thread=1 00:34:08.092 invalidate=1 
00:34:08.092 rw=write 00:34:08.092 time_based=1 00:34:08.092 runtime=1 00:34:08.092 ioengine=libaio 00:34:08.092 direct=1 00:34:08.092 bs=4096 00:34:08.092 iodepth=1 00:34:08.092 norandommap=0 00:34:08.092 numjobs=1 00:34:08.092 00:34:08.092 verify_dump=1 00:34:08.092 verify_backlog=512 00:34:08.092 verify_state_save=0 00:34:08.092 do_verify=1 00:34:08.092 verify=crc32c-intel 00:34:08.092 [job0] 00:34:08.092 filename=/dev/nvme0n1 00:34:08.093 Could not set queue depth (nvme0n1) 00:34:08.356 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.356 fio-3.35 00:34:08.356 Starting 1 thread 00:34:09.736 00:34:09.736 job0: (groupid=0, jobs=1): err= 0: pid=1995550: Wed Nov 6 13:30:51 2024 00:34:09.736 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:34:09.736 slat (nsec): min=25865, max=29827, avg=26468.65, stdev=926.89 00:34:09.736 clat (usec): min=1058, max=42088, avg=37135.89, stdev=13557.57 00:34:09.736 lat (usec): min=1088, max=42115, avg=37162.36, stdev=13557.04 00:34:09.736 clat percentiles (usec): 00:34:09.736 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[ 1172], 20.00th=[41681], 00:34:09.736 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:09.736 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:09.736 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:09.736 | 99.99th=[42206] 00:34:09.736 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:09.736 slat (usec): min=10, max=30029, avg=90.31, stdev=1325.76 00:34:09.736 clat (usec): min=222, max=895, avg=637.74, stdev=116.43 00:34:09.736 lat (usec): min=256, max=30595, avg=728.05, stdev=1327.93 00:34:09.736 clat percentiles (usec): 00:34:09.736 | 1.00th=[ 322], 5.00th=[ 420], 10.00th=[ 482], 20.00th=[ 529], 00:34:09.736 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 685], 00:34:09.736 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 791], 00:34:09.736 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 898], 99.95th=[ 898], 00:34:09.736 | 99.99th=[ 898] 00:34:09.736 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:09.736 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:09.736 lat (usec) : 250=0.38%, 500=12.48%, 750=67.49%, 1000=16.45% 00:34:09.736 lat (msec) : 2=0.38%, 50=2.84% 00:34:09.736 cpu : usr=0.99%, sys=1.39%, ctx=532, majf=0, minf=1 00:34:09.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.736 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:09.736 00:34:09.736 Run status group 0 (all jobs): 00:34:09.736 READ: bw=67.4KiB/s (69.0kB/s), 67.4KiB/s-67.4KiB/s (69.0kB/s-69.0kB/s), io=68.0KiB (69.6kB), run=1009-1009msec 00:34:09.736 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:34:09.736 00:34:09.736 Disk stats (read/write): 00:34:09.736 nvme0n1: ios=65/512, merge=0/0, ticks=1303/322, in_queue=1625, util=98.90% 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:09.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
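(The fio job generated by scripts/fio-wrapper is printed verbatim above, so the run is straightforward to reproduce standalone. A sketch, using the host NQN/ID and listener addresses from this log — the 4421 connect is the second path added in test case2:)

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

cat > nmic.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic.fio
nvme disconnect -n nqn.2016-06.io.spdk:cnode1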
00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.736 rmmod nvme_tcp 00:34:09.736 rmmod nvme_fabrics 00:34:09.736 rmmod nvme_keyring 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1994491 ']' 00:34:09.736 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1994491 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1994491 ']' 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1994491 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1994491 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 1994491' 00:34:09.737 killing process with pid 1994491 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1994491 00:34:09.737 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1994491 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.997 13:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.903 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.903 00:34:11.903 real 0m15.949s 00:34:11.903 user 0m33.223s 00:34:11.903 sys 0m7.401s 00:34:11.903 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:11.903 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:11.903 ************************************ 00:34:11.903 END TEST nvmf_nmic 00:34:11.903 ************************************ 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.164 ************************************ 00:34:12.164 START TEST nvmf_fio_target 00:34:12.164 ************************************ 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:12.164 * Looking for test storage... 
00:34:12.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:12.164 13:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:12.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.164 --rc genhtml_branch_coverage=1 00:34:12.164 --rc genhtml_function_coverage=1 00:34:12.164 --rc genhtml_legend=1 00:34:12.164 --rc geninfo_all_blocks=1 00:34:12.164 --rc geninfo_unexecuted_blocks=1 00:34:12.164 00:34:12.164 ' 00:34:12.164 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:12.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.164 --rc genhtml_branch_coverage=1 00:34:12.164 --rc genhtml_function_coverage=1 00:34:12.164 --rc genhtml_legend=1 00:34:12.164 --rc geninfo_all_blocks=1 00:34:12.165 --rc geninfo_unexecuted_blocks=1 00:34:12.165 00:34:12.165 ' 00:34:12.165 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.165 --rc genhtml_branch_coverage=1 00:34:12.165 --rc genhtml_function_coverage=1 00:34:12.165 --rc genhtml_legend=1 00:34:12.165 --rc geninfo_all_blocks=1 00:34:12.165 --rc geninfo_unexecuted_blocks=1 00:34:12.165 00:34:12.165 ' 00:34:12.165 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.165 --rc genhtml_branch_coverage=1 00:34:12.165 --rc genhtml_function_coverage=1 00:34:12.165 --rc genhtml_legend=1 00:34:12.165 --rc geninfo_all_blocks=1 00:34:12.165 --rc geninfo_unexecuted_blocks=1 00:34:12.165 
00:34:12.165 ' 00:34:12.165 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.425 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.426 13:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.568 13:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.568 13:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.568 Found net 
devices under 0000:31:00.0: cvl_0_0 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.568 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.568 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:34:20.569 00:34:20.569 --- 10.0.0.2 ping statistics --- 00:34:20.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.569 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:34:20.569 00:34:20.569 --- 10.0.0.1 ping statistics --- 00:34:20.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.569 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1999920 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1999920 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1999920 ']' 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
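For reference, the nvmf_tcp_init sequence traced above reduces to the following netns plumbing. This is a condensed recap of commands already present in the trace, not new steps; the iptables comment string is elided for brevity:

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (host) and target (namespace) ends
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP (port 4420) in on the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1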
00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.569 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.569 [2024-11-06 13:31:01.733643] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:20.569 [2024-11-06 13:31:01.734813] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:34:20.569 [2024-11-06 13:31:01.734862] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.569 [2024-11-06 13:31:01.834393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.569 [2024-11-06 13:31:01.886848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.569 [2024-11-06 13:31:01.886899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:20.569 [2024-11-06 13:31:01.886908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.569 [2024-11-06 13:31:01.886915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.569 [2024-11-06 13:31:01.886922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.569 [2024-11-06 13:31:01.889318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.569 [2024-11-06 13:31:01.889465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:20.569 [2024-11-06 13:31:01.889624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.569 [2024-11-06 13:31:01.889625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:20.569 [2024-11-06 13:31:01.967032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:20.569 [2024-11-06 13:31:01.968112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:20.569 [2024-11-06 13:31:01.968324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:20.569 [2024-11-06 13:31:01.968774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:20.569 [2024-11-06 13:31:01.968867] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
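Before the fio runs, the trace below configures the target over JSON-RPC. Condensed into a sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log (and the add_ns loop collapsing four separate calls), the sequence is:

    # TCP transport with 8192-byte in-capsule data (-o -u 8192)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # seven 64 MiB malloc bdevs with 512 B blocks: Malloc0..Malloc6
    rpc.py bdev_malloc_create 64 512
    # RAID-0 over Malloc2/Malloc3 and a concat bdev over Malloc4..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem exposing four namespaces on 10.0.0.2:4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect, yielding /dev/nvme0n1../dev/nvme0n4 for fio
    # ($NVME_HOSTNQN/$NVME_HOSTID are set earlier in the trace by nvmf/common.sh)
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420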
00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.831 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:21.093 [2024-11-06 13:31:02.754516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.093 13:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.354 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:21.354 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.354 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:21.354 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.614 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:21.614 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.875 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:21.875 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:22.135 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.135 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:22.135 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.395 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:22.395 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:22.655 13:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:22.655 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:22.655 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:22.915 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:22.915 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.176 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:23.176 13:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:23.176 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.437 [2024-11-06 13:31:05.178470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.437 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:23.698 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:23.959 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:24.219 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:24.219 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:24.219 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:24.219 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:24.220 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:24.220 13:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:26.762 13:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:26.762 [global] 00:34:26.762 thread=1 00:34:26.762 invalidate=1 00:34:26.762 rw=write 00:34:26.762 time_based=1 00:34:26.762 runtime=1 00:34:26.762 ioengine=libaio 00:34:26.762 direct=1 00:34:26.762 bs=4096 00:34:26.762 iodepth=1 00:34:26.762 norandommap=0 00:34:26.762 numjobs=1 00:34:26.762 00:34:26.762 verify_dump=1 00:34:26.763 verify_backlog=512 00:34:26.763 verify_state_save=0 00:34:26.763 do_verify=1 00:34:26.763 verify=crc32c-intel 00:34:26.763 [job0] 00:34:26.763 filename=/dev/nvme0n1 00:34:26.763 [job1] 00:34:26.763 filename=/dev/nvme0n2 00:34:26.763 [job2] 00:34:26.763 filename=/dev/nvme0n3 00:34:26.763 [job3] 00:34:26.763 filename=/dev/nvme0n4 00:34:26.763 Could not set queue depth (nvme0n1) 00:34:26.763 Could not set queue depth (nvme0n2) 00:34:26.763 Could not set queue depth (nvme0n3) 00:34:26.763 Could not set queue depth (nvme0n4) 00:34:26.763 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.763 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.763 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.763 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:26.763 fio-3.35 00:34:26.763 Starting 4 threads 00:34:28.169 00:34:28.169 job0: (groupid=0, jobs=1): err= 0: pid=2001502: Wed Nov 6 13:31:09 2024 00:34:28.169 read: IOPS=666, BW=2665KiB/s (2729kB/s)(2668KiB/1001msec) 00:34:28.169 slat (nsec): min=6851, max=40985, avg=22775.95, stdev=7306.26 00:34:28.169 clat (usec): min=195, max=1079, avg=770.07, stdev=74.68 00:34:28.169 lat (usec): min=203, max=1105, avg=792.85, stdev=76.57 00:34:28.169 clat percentiles (usec): 00:34:28.169 | 1.00th=[ 529], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 725], 00:34:28.169 | 30.00th=[ 766], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 791], 00:34:28.169 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 865], 00:34:28.169 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 1074], 99.95th=[ 1074], 00:34:28.169 | 99.99th=[ 1074] 00:34:28.169 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:28.169 slat (nsec): min=9773, max=65186, avg=27540.07, stdev=11298.68 00:34:28.169 clat (usec): min=110, max=927, avg=420.87, stdev=81.18 00:34:28.169 lat (usec): min=121, max=939, avg=448.41, stdev=87.21 00:34:28.169 clat percentiles (usec): 00:34:28.169 | 1.00th=[ 231], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 347], 00:34:28.169 | 30.00th=[ 379], 40.00th=[ 424], 50.00th=[ 441], 60.00th=[ 453], 00:34:28.169 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 506], 95.00th=[ 523], 00:34:28.169 | 99.00th=[ 570], 
99.50th=[ 611], 99.90th=[ 906], 99.95th=[ 930], 00:34:28.169 | 99.99th=[ 930] 00:34:28.169 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.169 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.169 lat (usec) : 250=1.18%, 500=52.99%, 750=16.20%, 1000=29.57% 00:34:28.169 lat (msec) : 2=0.06% 00:34:28.169 cpu : usr=2.50%, sys=4.20%, ctx=1693, majf=0, minf=1 00:34:28.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.169 issued rwts: total=667,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.169 job1: (groupid=0, jobs=1): err= 0: pid=2001503: Wed Nov 6 13:31:09 2024 00:34:28.169 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:34:28.169 slat (nsec): min=26710, max=27329, avg=26982.68, stdev=151.01 00:34:28.169 clat (usec): min=40795, max=41080, avg=40961.31, stdev=66.67 00:34:28.169 lat (usec): min=40822, max=41107, avg=40988.29, stdev=66.72 00:34:28.169 clat percentiles (usec): 00:34:28.169 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:28.169 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:28.169 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:28.169 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:28.169 | 99.99th=[41157] 00:34:28.169 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:34:28.169 slat (usec): min=9, max=2774, avg=28.81, stdev=122.27 00:34:28.169 clat (usec): min=148, max=609, avg=404.87, stdev=76.19 00:34:28.169 lat (usec): min=160, max=3232, avg=433.68, stdev=150.08 00:34:28.169 clat percentiles (usec): 00:34:28.169 | 1.00th=[ 258], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 343], 00:34:28.169 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 392], 60.00th=[ 437], 00:34:28.169 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 523], 00:34:28.169 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 611], 99.95th=[ 611], 00:34:28.169 | 99.99th=[ 611] 00:34:28.169 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.169 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.169 lat (usec) : 250=0.94%, 500=86.25%, 750=9.23% 00:34:28.169 lat (msec) : 50=3.58% 00:34:28.169 cpu : usr=0.70%, sys=1.10%, ctx=534, majf=0, minf=1 00:34:28.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.169 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.169 job2: (groupid=0, jobs=1): err= 0: pid=2001504: Wed Nov 6 13:31:09 2024 00:34:28.169 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec) 00:34:28.169 slat (nsec): min=12931, max=30787, avg=26281.61, stdev=3504.50 00:34:28.169 clat (usec): min=483, max=42262, avg=37385.11, stdev=13308.43 00:34:28.169 lat (usec): min=514, max=42275, avg=37411.39, stdev=13307.51 00:34:28.169 clat percentiles (usec): 00:34:28.170 | 1.00th=[ 482], 5.00th=[ 482], 10.00th=[ 1123], 20.00th=[41681], 00:34:28.170 | 30.00th=[41681], 
40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:28.170 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:28.170 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.170 | 99.99th=[42206] 00:34:28.170 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:34:28.170 slat (nsec): min=10645, max=54210, avg=34383.95, stdev=6740.22 00:34:28.170 clat (usec): min=264, max=1137, avg=666.47, stdev=147.42 00:34:28.170 lat (usec): min=277, max=1173, avg=700.85, stdev=148.71 00:34:28.170 clat percentiles (usec): 00:34:28.170 | 1.00th=[ 359], 5.00th=[ 437], 10.00th=[ 478], 20.00th=[ 529], 00:34:28.170 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:34:28.170 | 70.00th=[ 742], 80.00th=[ 799], 90.00th=[ 865], 95.00th=[ 922], 00:34:28.170 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:28.170 | 99.99th=[ 1139] 00:34:28.170 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.170 lat (usec) : 500=14.15%, 750=54.53%, 1000=27.17% 00:34:28.170 lat (msec) : 2=1.13%, 50=3.02% 00:34:28.170 cpu : usr=1.06%, sys=1.35%, ctx=531, majf=0, minf=1 00:34:28.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.170 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.170 job3: (groupid=0, jobs=1): err= 0: pid=2001505: Wed Nov 6 13:31:09 2024 00:34:28.170 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1020msec) 00:34:28.170 slat (nsec): min=26096, max=30515, avg=26788.82, stdev=1023.47 00:34:28.170 clat (usec): min=1325, max=42109, avg=39425.48, stdev=9823.61 00:34:28.170 lat (usec): min=1352, max=42136, avg=39452.27, stdev=9823.61 00:34:28.170 clat percentiles (usec): 00:34:28.170 | 1.00th=[ 1319], 5.00th=[ 1319], 10.00th=[41157], 20.00th=[41157], 00:34:28.170 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:28.170 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:28.170 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:28.170 | 99.99th=[42206] 00:34:28.170 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:34:28.170 slat (nsec): min=10249, max=64862, avg=31859.20, stdev=9185.95 00:34:28.170 clat (usec): min=294, max=1030, avg=640.58, stdev=130.39 00:34:28.170 lat (usec): min=306, max=1065, avg=672.44, stdev=134.22 00:34:28.170 clat percentiles (usec): 00:34:28.170 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 529], 00:34:28.170 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:34:28.170 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 848], 00:34:28.170 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1029], 99.95th=[ 1029], 00:34:28.170 | 99.99th=[ 1029] 00:34:28.170 bw ( KiB/s): min= 4096, max= 4096, per=41.52%, avg=4096.00, stdev= 0.00, samples=1 00:34:28.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:28.170 lat (usec) : 500=14.37%, 750=61.81%, 1000=20.42% 00:34:28.170 lat (msec) : 2=0.38%, 50=3.02% 00:34:28.170 cpu : usr=0.98%, sys=1.28%, ctx=530, majf=0, minf=1 00:34:28.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:34:28.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:28.170 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:28.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:28.170 00:34:28.170 Run status group 0 (all jobs): 00:34:28.170 READ: bw=2778KiB/s (2845kB/s), 66.7KiB/s-2665KiB/s (68.3kB/s-2729kB/s), io=2884KiB (2953kB), run=1001-1038msec 00:34:28.170 WRITE: bw=9865KiB/s (10.1MB/s), 1973KiB/s-4092KiB/s (2020kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1038msec 00:34:28.170 00:34:28.170 Disk stats (read/write): 00:34:28.170 nvme0n1: ios=534/977, merge=0/0, ticks=1241/399, in_queue=1640, util=85.97% 00:34:28.170 nvme0n2: ios=70/512, merge=0/0, ticks=767/196, in_queue=963, util=90.86% 00:34:28.170 nvme0n3: ios=70/512, merge=0/0, ticks=1209/326, in_queue=1535, util=93.46% 00:34:28.170 nvme0n4: ios=70/512, merge=0/0, ticks=1266/323, in_queue=1589, util=94.04% 00:34:28.170 13:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:28.170 [global] 00:34:28.170 thread=1 00:34:28.170 invalidate=1 00:34:28.170 rw=randwrite 00:34:28.170 time_based=1 00:34:28.170 runtime=1 00:34:28.170 ioengine=libaio 00:34:28.170 direct=1 00:34:28.170 bs=4096 00:34:28.170 iodepth=1 00:34:28.170 norandommap=0 00:34:28.170 numjobs=1 00:34:28.170 00:34:28.170 verify_dump=1 00:34:28.170 verify_backlog=512 00:34:28.170 verify_state_save=0 00:34:28.170 do_verify=1 00:34:28.170 verify=crc32c-intel 00:34:28.170 [job0] 00:34:28.170 filename=/dev/nvme0n1 00:34:28.170 [job1] 00:34:28.170 filename=/dev/nvme0n2 00:34:28.170 [job2] 00:34:28.170 filename=/dev/nvme0n3 00:34:28.170 [job3] 00:34:28.170 filename=/dev/nvme0n4 00:34:28.170 Could not set queue depth (nvme0n1) 00:34:28.170 Could not set queue depth (nvme0n2) 00:34:28.170 Could not set queue depth (nvme0n3) 00:34:28.170 Could not set queue depth (nvme0n4) 00:34:28.438 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.438 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.438 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.438 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.438 fio-3.35 00:34:28.438 Starting 4 threads 00:34:29.842 00:34:29.842 job0: (groupid=0, jobs=1): err= 0: pid=2002023: Wed Nov 6 13:31:11 2024 00:34:29.842 read: IOPS=18, BW=74.6KiB/s (76.4kB/s)(76.0KiB/1019msec) 00:34:29.842 slat (nsec): min=27008, max=28193, avg=27451.63, stdev=347.61 00:34:29.842 clat (usec): min=963, max=42082, avg=39643.62, stdev=9374.39 00:34:29.842 lat (usec): min=990, max=42109, avg=39671.07, stdev=9374.49 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41157], 00:34:29.842 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:29.842 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:29.842 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:29.842 | 99.99th=[42206] 00:34:29.842 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 
00:34:29.842 slat (nsec): min=3227, max=53084, avg=22302.07, stdev=11355.75 00:34:29.842 clat (usec): min=116, max=921, avg=490.02, stdev=139.79 00:34:29.842 lat (usec): min=126, max=935, avg=512.32, stdev=142.63 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 161], 5.00th=[ 239], 10.00th=[ 285], 20.00th=[ 375], 00:34:29.842 | 30.00th=[ 416], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 529], 00:34:29.842 | 70.00th=[ 562], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 701], 00:34:29.842 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 922], 00:34:29.842 | 99.99th=[ 922] 00:34:29.842 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.842 lat (usec) : 250=5.65%, 500=43.13%, 750=45.95%, 1000=1.88% 00:34:29.842 lat (msec) : 50=3.39% 00:34:29.842 cpu : usr=0.59%, sys=1.77%, ctx=534, majf=0, minf=1 00:34:29.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.842 job1: (groupid=0, jobs=1): err= 0: pid=2002024: Wed Nov 6 13:31:11 2024 00:34:29.842 read: IOPS=346, BW=1385KiB/s (1418kB/s)(1388KiB/1002msec) 00:34:29.842 slat (nsec): min=7244, max=44452, avg=26503.10, stdev=4097.77 00:34:29.842 clat (usec): min=427, max=41086, avg=1961.09, stdev=6216.41 00:34:29.842 lat (usec): min=436, max=41113, avg=1987.59, stdev=6216.27 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 545], 5.00th=[ 644], 10.00th=[ 701], 20.00th=[ 791], 00:34:29.842 | 30.00th=[ 873], 40.00th=[ 938], 50.00th=[ 979], 60.00th=[ 1029], 00:34:29.842 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1188], 95.00th=[ 1254], 00:34:29.842 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:29.842 | 99.99th=[41157] 00:34:29.842 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:34:29.842 slat (nsec): min=9985, max=56105, avg=32068.67, stdev=7115.23 00:34:29.842 clat (usec): min=135, max=1121, avg=562.08, stdev=183.69 00:34:29.842 lat (usec): min=147, max=1155, avg=594.15, stdev=184.70 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 249], 5.00th=[ 310], 10.00th=[ 363], 20.00th=[ 396], 00:34:29.842 | 30.00th=[ 424], 40.00th=[ 469], 50.00th=[ 553], 60.00th=[ 611], 00:34:29.842 | 70.00th=[ 644], 80.00th=[ 709], 90.00th=[ 832], 95.00th=[ 914], 00:34:29.842 | 99.00th=[ 996], 99.50th=[ 1037], 99.90th=[ 1123], 99.95th=[ 1123], 00:34:29.842 | 99.99th=[ 1123] 00:34:29.842 bw ( KiB/s): min= 4104, max= 4104, per=45.18%, avg=4104.00, stdev= 0.00, samples=1 00:34:29.842 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:34:29.842 lat (usec) : 250=0.70%, 500=25.38%, 750=30.03%, 1000=24.33% 00:34:29.842 lat (msec) : 2=18.51%, 50=1.05% 00:34:29.842 cpu : usr=1.50%, sys=2.50%, ctx=860, majf=0, minf=1 00:34:29.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 issued rwts: total=347,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.842 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:34:29.842 job2: (groupid=0, jobs=1): err= 0: pid=2002025: Wed Nov 6 13:31:11 2024 00:34:29.842 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:29.842 slat (nsec): min=7307, max=62571, avg=27978.14, stdev=4648.40 00:34:29.842 clat (usec): min=513, max=1239, avg=923.52, stdev=135.25 00:34:29.842 lat (usec): min=541, max=1266, avg=951.50, stdev=135.54 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 570], 5.00th=[ 668], 10.00th=[ 750], 20.00th=[ 816], 00:34:29.842 | 30.00th=[ 865], 40.00th=[ 906], 50.00th=[ 947], 60.00th=[ 971], 00:34:29.842 | 70.00th=[ 1004], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1123], 00:34:29.842 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:34:29.842 | 99.99th=[ 1237] 00:34:29.842 write: IOPS=793, BW=3173KiB/s (3249kB/s)(3176KiB/1001msec); 0 zone resets 00:34:29.842 slat (nsec): min=9462, max=69312, avg=32357.70, stdev=9416.56 00:34:29.842 clat (usec): min=260, max=958, avg=599.68, stdev=130.33 00:34:29.842 lat (usec): min=286, max=993, avg=632.04, stdev=133.29 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 281], 5.00th=[ 375], 10.00th=[ 420], 20.00th=[ 486], 00:34:29.842 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:34:29.842 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:34:29.842 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:34:29.842 | 99.99th=[ 963] 00:34:29.842 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.842 lat (usec) : 500=13.94%, 750=44.56%, 1000=29.79% 00:34:29.842 lat (msec) : 2=11.72% 00:34:29.842 cpu : usr=2.30%, sys=5.70%, ctx=1308, majf=0, minf=1 00:34:29.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 issued rwts: total=512,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.842 job3: (groupid=0, jobs=1): err= 0: pid=2002026: Wed Nov 6 13:31:11 2024 00:34:29.842 read: IOPS=12, BW=50.7KiB/s (51.9kB/s)(52.0KiB/1026msec) 00:34:29.842 slat (nsec): min=26858, max=27327, avg=27072.46, stdev=147.72 00:34:29.842 clat (usec): min=41471, max=42028, avg=41903.64, stdev=144.30 00:34:29.842 lat (usec): min=41498, max=42055, avg=41930.72, stdev=144.26 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:29.842 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:29.842 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:29.842 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:29.842 | 99.99th=[42206] 00:34:29.842 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:34:29.842 slat (usec): min=10, max=1580, avg=37.44, stdev=68.53 00:34:29.842 clat (usec): min=397, max=1197, avg=893.16, stdev=121.36 00:34:29.842 lat (usec): min=415, max=2570, avg=930.60, stdev=142.70 00:34:29.842 clat percentiles (usec): 00:34:29.842 | 1.00th=[ 537], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 807], 00:34:29.842 | 30.00th=[ 848], 40.00th=[ 889], 50.00th=[ 914], 60.00th=[ 938], 00:34:29.842 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057], 00:34:29.842 | 
99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:34:29.842 | 99.99th=[ 1205] 00:34:29.842 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, avg=4096.00, stdev= 0.00, samples=1 00:34:29.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:29.842 lat (usec) : 500=0.95%, 750=9.90%, 1000=71.05% 00:34:29.842 lat (msec) : 2=15.62%, 50=2.48% 00:34:29.842 cpu : usr=1.17%, sys=1.37%, ctx=527, majf=0, minf=1 00:34:29.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.842 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:29.842 00:34:29.842 Run status group 0 (all jobs): 00:34:29.842 READ: bw=3474KiB/s (3557kB/s), 50.7KiB/s-2046KiB/s (51.9kB/s-2095kB/s), io=3564KiB (3650kB), run=1001-1026msec 00:34:29.842 WRITE: bw=9084KiB/s (9302kB/s), 1996KiB/s-3173KiB/s (2044kB/s-3249kB/s), io=9320KiB (9544kB), run=1001-1026msec 00:34:29.842 00:34:29.842 Disk stats (read/write): 00:34:29.842 nvme0n1: ios=67/512, merge=0/0, ticks=885/196, in_queue=1081, util=85.07% 00:34:29.842 nvme0n2: ios=390/512, merge=0/0, ticks=861/286, in_queue=1147, util=89.91% 00:34:29.842 nvme0n3: ios=575/539, merge=0/0, ticks=529/245, in_queue=774, util=95.47% 00:34:29.842 nvme0n4: ios=66/512, merge=0/0, ticks=876/434, in_queue=1310, util=94.26% 00:34:29.842 13:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:29.842 [global] 00:34:29.842 thread=1 00:34:29.842 invalidate=1 00:34:29.842 rw=write 00:34:29.842 time_based=1 00:34:29.842 runtime=1 00:34:29.842 ioengine=libaio 00:34:29.842 direct=1 00:34:29.842 bs=4096 00:34:29.842 iodepth=128 00:34:29.842 norandommap=0 00:34:29.842 numjobs=1 00:34:29.842 00:34:29.842 verify_dump=1 00:34:29.842 verify_backlog=512 00:34:29.842 verify_state_save=0 00:34:29.842 do_verify=1 00:34:29.842 verify=crc32c-intel 00:34:29.842 [job0] 00:34:29.842 filename=/dev/nvme0n1 00:34:29.842 [job1] 00:34:29.843 filename=/dev/nvme0n2 00:34:29.843 [job2] 00:34:29.843 filename=/dev/nvme0n3 00:34:29.843 [job3] 00:34:29.843 filename=/dev/nvme0n4 00:34:29.843 Could not set queue depth (nvme0n1) 00:34:29.843 Could not set queue depth (nvme0n2) 00:34:29.843 Could not set queue depth (nvme0n3) 00:34:29.843 Could not set queue depth (nvme0n4) 00:34:30.109 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.109 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.109 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.109 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:30.109 fio-3.35 00:34:30.109 Starting 4 threads 00:34:31.527 00:34:31.527 job0: (groupid=0, jobs=1): err= 0: pid=2002549: Wed Nov 6 13:31:13 2024 00:34:31.527 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:34:31.527 slat (nsec): min=893, max=7583.8k, avg=61607.94, stdev=427753.43 00:34:31.527 clat (usec): min=2926, max=20442, avg=7847.66, stdev=1868.34 00:34:31.527 lat (usec): min=2932, max=20469, avg=7909.27, 
stdev=1905.63 00:34:31.527 clat percentiles (usec): 00:34:31.527 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 6194], 20.00th=[ 6849], 00:34:31.527 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:34:31.527 | 70.00th=[ 7767], 80.00th=[ 8291], 90.00th=[10159], 95.00th=[12256], 00:34:31.527 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15664], 99.95th=[16450], 00:34:31.527 | 99.99th=[20317] 00:34:31.527 write: IOPS=8674, BW=33.9MiB/s (35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:34:31.527 slat (nsec): min=1518, max=4098.1k, avg=52816.20, stdev=307967.71 00:34:31.527 clat (usec): min=785, max=17729, avg=7242.00, stdev=1406.90 00:34:31.527 lat (usec): min=1177, max=17731, avg=7294.82, stdev=1420.83 00:34:31.527 clat percentiles (usec): 00:34:31.527 | 1.00th=[ 3490], 5.00th=[ 4752], 10.00th=[ 5932], 20.00th=[ 6783], 00:34:31.527 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7308], 00:34:31.527 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8455], 95.00th=[ 8848], 00:34:31.527 | 99.00th=[13566], 99.50th=[14484], 99.90th=[16057], 99.95th=[16057], 00:34:31.527 | 99.99th=[17695] 00:34:31.527 bw ( KiB/s): min=32768, max=35816, per=31.90%, avg=34292.00, stdev=2155.26, samples=2 00:34:31.527 iops : min= 8192, max= 8954, avg=8573.00, stdev=538.82, samples=2 00:34:31.527 lat (usec) : 1000=0.01% 00:34:31.527 lat (msec) : 2=0.18%, 4=1.01%, 10=92.48%, 20=6.32%, 50=0.01% 00:34:31.527 cpu : usr=5.49%, sys=6.79%, ctx=733, majf=0, minf=1 00:34:31.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:31.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.527 issued rwts: total=8192,8701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.527 job1: (groupid=0, jobs=1): err= 0: pid=2002550: Wed Nov 6 13:31:13 2024 00:34:31.527 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:34:31.527 slat (nsec): min=943, max=9165.2k, avg=91539.46, stdev=596443.91 00:34:31.527 clat (usec): min=3035, max=26759, avg=11881.97, stdev=3525.67 00:34:31.527 lat (usec): min=3040, max=26761, avg=11973.51, stdev=3566.20 00:34:31.527 clat percentiles (usec): 00:34:31.527 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8455], 00:34:31.527 | 30.00th=[ 9372], 40.00th=[10683], 50.00th=[12125], 60.00th=[13173], 00:34:31.527 | 70.00th=[13829], 80.00th=[14877], 90.00th=[16450], 95.00th=[17695], 00:34:31.527 | 99.00th=[20055], 99.50th=[20579], 99.90th=[26870], 99.95th=[26870], 00:34:31.527 | 99.99th=[26870] 00:34:31.527 write: IOPS=6082, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1004msec); 0 zone resets 00:34:31.527 slat (nsec): min=1626, max=11268k, avg=73810.54, stdev=449433.88 00:34:31.527 clat (usec): min=883, max=25036, avg=9858.81, stdev=2946.78 00:34:31.527 lat (usec): min=1193, max=25039, avg=9932.62, stdev=2979.13 00:34:31.527 clat percentiles (usec): 00:34:31.527 | 1.00th=[ 3064], 5.00th=[ 5407], 10.00th=[ 6128], 20.00th=[ 7439], 00:34:31.527 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[10683], 00:34:31.527 | 70.00th=[11338], 80.00th=[12387], 90.00th=[13435], 95.00th=[14484], 00:34:31.527 | 99.00th=[17433], 99.50th=[17695], 99.90th=[23987], 99.95th=[23987], 00:34:31.527 | 99.99th=[25035] 00:34:31.527 bw ( KiB/s): min=19736, max=28096, per=22.25%, avg=23916.00, stdev=5911.41, samples=2 00:34:31.527 iops : min= 4934, max= 7024, avg=5979.00, stdev=1477.85, 
samples=2 00:34:31.527 lat (usec) : 1000=0.01% 00:34:31.527 lat (msec) : 2=0.18%, 4=0.89%, 10=42.19%, 20=56.24%, 50=0.49% 00:34:31.528 cpu : usr=3.99%, sys=6.28%, ctx=473, majf=0, minf=1 00:34:31.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:31.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.528 issued rwts: total=5632,6107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.528 job2: (groupid=0, jobs=1): err= 0: pid=2002553: Wed Nov 6 13:31:13 2024 00:34:31.528 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:34:31.528 slat (nsec): min=981, max=18935k, avg=71677.20, stdev=577873.82 00:34:31.528 clat (usec): min=2993, max=34439, avg=9536.11, stdev=4002.38 00:34:31.528 lat (usec): min=3001, max=34446, avg=9607.78, stdev=4034.80 00:34:31.528 clat percentiles (usec): 00:34:31.528 | 1.00th=[ 4113], 5.00th=[ 5473], 10.00th=[ 6456], 20.00th=[ 7111], 00:34:31.528 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:34:31.528 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[13566], 95.00th=[16188], 00:34:31.528 | 99.00th=[25297], 99.50th=[29230], 99.90th=[34341], 99.95th=[34341], 00:34:31.528 | 99.99th=[34341] 00:34:31.528 write: IOPS=7554, BW=29.5MiB/s (30.9MB/s)(29.7MiB/1005msec); 0 zone resets 00:34:31.528 slat (nsec): min=1666, max=7396.6k, avg=58671.00, stdev=413215.70 00:34:31.528 clat (usec): min=1009, max=25374, avg=7771.88, stdev=2180.34 00:34:31.528 lat (usec): min=1498, max=25376, avg=7830.55, stdev=2190.95 00:34:31.528 clat percentiles (usec): 00:34:31.528 | 1.00th=[ 2933], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 6194], 00:34:31.528 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7898], 00:34:31.528 | 70.00th=[ 8029], 80.00th=[ 8979], 90.00th=[10552], 95.00th=[11731], 00:34:31.528 | 99.00th=[13304], 99.50th=[18482], 99.90th=[19530], 99.95th=[19530], 00:34:31.528 | 99.99th=[25297] 00:34:31.528 bw ( KiB/s): min=28672, max=31040, per=27.77%, avg=29856.00, stdev=1674.43, samples=2 00:34:31.528 iops : min= 7168, max= 7760, avg=7464.00, stdev=418.61, samples=2 00:34:31.528 lat (msec) : 2=0.09%, 4=1.59%, 10=78.13%, 20=18.25%, 50=1.94% 00:34:31.528 cpu : usr=4.18%, sys=7.77%, ctx=627, majf=0, minf=1 00:34:31.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:31.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.528 issued rwts: total=7168,7592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.528 job3: (groupid=0, jobs=1): err= 0: pid=2002554: Wed Nov 6 13:31:13 2024 00:34:31.528 read: IOPS=4523, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1004msec) 00:34:31.528 slat (nsec): min=946, max=9392.6k, avg=114770.63, stdev=668563.66 00:34:31.528 clat (usec): min=879, max=30739, avg=14528.77, stdev=5608.23 00:34:31.528 lat (usec): min=4346, max=30746, avg=14643.54, stdev=5626.66 00:34:31.528 clat percentiles (usec): 00:34:31.528 | 1.00th=[ 5145], 5.00th=[ 7111], 10.00th=[ 8094], 20.00th=[ 8586], 00:34:31.528 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[15533], 60.00th=[16712], 00:34:31.528 | 70.00th=[18482], 80.00th=[20055], 90.00th=[21627], 95.00th=[23462], 00:34:31.528 | 99.00th=[25560], 99.50th=[26084], 99.90th=[30802], 
99.95th=[30802], 00:34:31.528 | 99.99th=[30802] 00:34:31.528 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:34:31.528 slat (nsec): min=1579, max=8522.9k, avg=98631.80, stdev=544739.39 00:34:31.528 clat (usec): min=1237, max=58604, avg=13316.48, stdev=7214.91 00:34:31.528 lat (usec): min=1253, max=58636, avg=13415.11, stdev=7260.37 00:34:31.528 clat percentiles (usec): 00:34:31.528 | 1.00th=[ 4752], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8586], 00:34:31.528 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[11731], 60.00th=[13960], 00:34:31.528 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20579], 95.00th=[23725], 00:34:31.528 | 99.00th=[45876], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:34:31.528 | 99.99th=[58459] 00:34:31.528 bw ( KiB/s): min=15048, max=21816, per=17.15%, avg=18432.00, stdev=4785.70, samples=2 00:34:31.528 iops : min= 3762, max= 5454, avg=4608.00, stdev=1196.42, samples=2 00:34:31.528 lat (usec) : 1000=0.01% 00:34:31.528 lat (msec) : 2=0.20%, 4=0.01%, 10=40.87%, 20=43.89%, 50=14.68% 00:34:31.528 lat (msec) : 100=0.34% 00:34:31.528 cpu : usr=3.39%, sys=4.59%, ctx=406, majf=0, minf=1 00:34:31.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:31.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:31.528 issued rwts: total=4542,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:31.528 00:34:31.528 Run status group 0 (all jobs): 00:34:31.528 READ: bw=99.2MiB/s (104MB/s), 17.7MiB/s-31.9MiB/s (18.5MB/s-33.5MB/s), io=99.7MiB (105MB), run=1003-1005msec 00:34:31.528 WRITE: bw=105MiB/s (110MB/s), 17.9MiB/s-33.9MiB/s (18.8MB/s-35.5MB/s), io=106MiB (111MB), run=1003-1005msec 00:34:31.528 00:34:31.528 Disk stats (read/write): 00:34:31.528 nvme0n1: ios=6965/7168, merge=0/0, ticks=36217/33021, in_queue=69238, util=96.59% 00:34:31.528 nvme0n2: ios=5122/5120, merge=0/0, ticks=31842/27700, in_queue=59542, util=98.37% 00:34:31.528 nvme0n3: ios=6016/6144, merge=0/0, ticks=51815/43102, in_queue=94917, util=98.22% 00:34:31.528 nvme0n4: ios=3770/4096, merge=0/0, ticks=21076/26060, in_queue=47136, util=88.97% 00:34:31.528 13:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:31.528 [global] 00:34:31.528 thread=1 00:34:31.528 invalidate=1 00:34:31.528 rw=randwrite 00:34:31.528 time_based=1 00:34:31.528 runtime=1 00:34:31.528 ioengine=libaio 00:34:31.528 direct=1 00:34:31.528 bs=4096 00:34:31.528 iodepth=128 00:34:31.528 norandommap=0 00:34:31.528 numjobs=1 00:34:31.528 00:34:31.528 verify_dump=1 00:34:31.528 verify_backlog=512 00:34:31.528 verify_state_save=0 00:34:31.528 do_verify=1 00:34:31.528 verify=crc32c-intel 00:34:31.528 [job0] 00:34:31.528 filename=/dev/nvme0n1 00:34:31.528 [job1] 00:34:31.528 filename=/dev/nvme0n2 00:34:31.528 [job2] 00:34:31.528 filename=/dev/nvme0n3 00:34:31.528 [job3] 00:34:31.528 filename=/dev/nvme0n4 00:34:31.528 Could not set queue depth (nvme0n1) 00:34:31.528 Could not set queue depth (nvme0n2) 00:34:31.528 Could not set queue depth (nvme0n3) 00:34:31.528 Could not set queue depth (nvme0n4) 00:34:31.793 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.793 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.793 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.793 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:31.793 fio-3.35 00:34:31.793 Starting 4 threads 00:34:33.201 00:34:33.201 job0: (groupid=0, jobs=1): err= 0: pid=2003016: Wed Nov 6 13:31:14 2024 00:34:33.201 read: IOPS=9037, BW=35.3MiB/s (37.0MB/s)(35.5MiB/1005msec) 00:34:33.201 slat (nsec): min=953, max=6725.3k, avg=57005.11, stdev=433379.52 00:34:33.201 clat (usec): min=1717, max=15539, avg=7444.58, stdev=1818.15 00:34:33.201 lat (usec): min=1720, max=16619, avg=7501.58, stdev=1841.24 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 3851], 5.00th=[ 5014], 10.00th=[ 5669], 20.00th=[ 6128], 00:34:33.201 | 30.00th=[ 6390], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7373], 00:34:33.201 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[10159], 95.00th=[11207], 00:34:33.201 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15139], 99.95th=[15533], 00:34:33.201 | 99.99th=[15533] 00:34:33.201 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:34:33.201 slat (nsec): min=1583, max=6875.7k, avg=47790.15, stdev=336242.40 00:34:33.201 clat (usec): min=1139, max=14209, avg=6493.06, stdev=1523.21 00:34:33.201 lat (usec): min=1148, max=14211, avg=6540.85, stdev=1529.30 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 2769], 5.00th=[ 3982], 10.00th=[ 4424], 20.00th=[ 5211], 00:34:33.201 | 30.00th=[ 5866], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6849], 00:34:33.201 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 8455], 95.00th=[ 9241], 00:34:33.201 | 99.00th=[10159], 99.50th=[11338], 99.90th=[12518], 99.95th=[13173], 00:34:33.201 | 99.99th=[14222] 00:34:33.201 bw ( KiB/s): min=36864, max=36864, per=35.42%, avg=36864.00, stdev= 0.00, samples=2 00:34:33.201 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:34:33.201 lat (msec) : 2=0.19%, 4=2.99%, 10=90.91%, 20=5.91% 00:34:33.201 cpu : usr=5.08%, sys=8.76%, ctx=657, majf=0, minf=1 00:34:33.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:34:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.201 issued rwts: total=9083,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.201 job1: (groupid=0, jobs=1): err= 0: pid=2003030: Wed Nov 6 13:31:14 2024 00:34:33.201 read: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1008msec) 00:34:33.201 slat (nsec): min=938, max=17418k, avg=94953.56, stdev=741679.73 00:34:33.201 clat (usec): min=1346, max=41516, avg=11516.33, stdev=5556.79 00:34:33.201 lat (usec): min=1355, max=41520, avg=11611.29, stdev=5631.39 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 2802], 5.00th=[ 5342], 10.00th=[ 6652], 20.00th=[ 7767], 00:34:33.201 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11207], 00:34:33.201 | 70.00th=[12125], 80.00th=[14746], 90.00th=[18482], 95.00th=[21365], 00:34:33.201 | 99.00th=[32637], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:34:33.201 | 99.99th=[41681] 00:34:33.201 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:34:33.201 slat (nsec): min=1618, max=7042.3k, avg=107165.59, stdev=522077.19 
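(Editorial aside, not part of the captured log.) The per-job summaries above all follow the same shape: a read/write line carrying IOPS= and BW= tokens, then slat/clat/lat breakdowns and percentile tables. A quick way to tabulate throughput from a saved copy of this console output, assuming it was captured to fio-run.log (a hypothetical filename):

# For each line, print the word before every IOPS= token (read:/write:)
# followed by the token itself, e.g. "read: IOPS=18,".
awk '{ for (i = 2; i <= NF; i++) if ($i ~ /^IOPS=/) print $(i-1), $i }' fio-run.log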
00:34:33.201 clat (usec): min=1171, max=41507, avg=16342.58, stdev=10316.75 00:34:33.201 lat (usec): min=1183, max=41510, avg=16449.75, stdev=10391.93 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 2966], 5.00th=[ 5407], 10.00th=[ 6456], 20.00th=[ 7046], 00:34:33.201 | 30.00th=[ 7832], 40.00th=[ 9503], 50.00th=[13173], 60.00th=[16909], 00:34:33.201 | 70.00th=[21365], 80.00th=[27919], 90.00th=[33424], 95.00th=[35390], 00:34:33.201 | 99.00th=[38011], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:34:33.201 | 99.99th=[41681] 00:34:33.201 bw ( KiB/s): min=15760, max=21104, per=17.71%, avg=18432.00, stdev=3778.78, samples=2 00:34:33.201 iops : min= 3940, max= 5276, avg=4608.00, stdev=944.69, samples=2 00:34:33.201 lat (msec) : 2=0.50%, 4=1.89%, 10=40.62%, 20=36.77%, 50=20.21% 00:34:33.201 cpu : usr=2.98%, sys=4.87%, ctx=451, majf=0, minf=1 00:34:33.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.201 issued rwts: total=4564,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.201 job2: (groupid=0, jobs=1): err= 0: pid=2003046: Wed Nov 6 13:31:14 2024 00:34:33.201 read: IOPS=6518, BW=25.5MiB/s (26.7MB/s)(25.5MiB/1003msec) 00:34:33.201 slat (nsec): min=956, max=8840.4k, avg=72180.02, stdev=550388.61 00:34:33.201 clat (usec): min=1891, max=26157, avg=9282.70, stdev=2858.62 00:34:33.201 lat (usec): min=2140, max=26184, avg=9354.88, stdev=2896.08 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 3851], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7177], 00:34:33.201 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9372], 00:34:33.201 | 70.00th=[10290], 80.00th=[11863], 90.00th=[12780], 95.00th=[14484], 00:34:33.201 | 99.00th=[17695], 99.50th=[18220], 99.90th=[21890], 99.95th=[22152], 00:34:33.201 | 99.99th=[26084] 00:34:33.201 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:34:33.201 slat (nsec): min=1629, max=15424k, avg=72852.81, stdev=551607.44 00:34:33.201 clat (usec): min=978, max=64046, avg=9998.52, stdev=7509.28 00:34:33.201 lat (usec): min=986, max=64055, avg=10071.37, stdev=7556.92 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 2474], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 6128], 00:34:33.201 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8160], 00:34:33.201 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[17433], 95.00th=[21627], 00:34:33.201 | 99.00th=[53216], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:34:33.201 | 99.99th=[64226] 00:34:33.201 bw ( KiB/s): min=20480, max=32768, per=25.58%, avg=26624.00, stdev=8688.93, samples=2 00:34:33.201 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:34:33.201 lat (usec) : 1000=0.13% 00:34:33.201 lat (msec) : 2=0.26%, 4=1.33%, 10=65.86%, 20=29.42%, 50=2.41% 00:34:33.201 lat (msec) : 100=0.59% 00:34:33.201 cpu : usr=4.79%, sys=6.89%, ctx=392, majf=0, minf=1 00:34:33.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:34:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.201 issued rwts: total=6538,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.201 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:34:33.201 job3: (groupid=0, jobs=1): err= 0: pid=2003052: Wed Nov 6 13:31:14 2024 00:34:33.201 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:34:33.201 slat (nsec): min=933, max=16136k, avg=88657.13, stdev=672762.83 00:34:33.201 clat (usec): min=2831, max=42190, avg=12077.16, stdev=6881.27 00:34:33.201 lat (usec): min=2841, max=42197, avg=12165.81, stdev=6922.19 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 6063], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 8029], 00:34:33.201 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[10552], 00:34:33.201 | 70.00th=[11469], 80.00th=[13042], 90.00th=[22414], 95.00th=[29230], 00:34:33.201 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:34:33.201 | 99.99th=[42206] 00:34:33.201 write: IOPS=5710, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec); 0 zone resets 00:34:33.201 slat (nsec): min=1533, max=13808k, avg=82664.35, stdev=536511.24 00:34:33.201 clat (usec): min=1180, max=36645, avg=10397.87, stdev=5153.51 00:34:33.201 lat (usec): min=1192, max=36651, avg=10480.53, stdev=5196.25 00:34:33.201 clat percentiles (usec): 00:34:33.201 | 1.00th=[ 2474], 5.00th=[ 4948], 10.00th=[ 6718], 20.00th=[ 7898], 00:34:33.201 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:33.201 | 70.00th=[ 9896], 80.00th=[11469], 90.00th=[16909], 95.00th=[20317], 00:34:33.201 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:34:33.201 | 99.99th=[36439] 00:34:33.201 bw ( KiB/s): min=18600, max=26504, per=21.67%, avg=22552.00, stdev=5588.97, samples=2 00:34:33.201 iops : min= 4650, max= 6626, avg=5638.00, stdev=1397.24, samples=2 00:34:33.201 lat (msec) : 2=0.27%, 4=1.09%, 10=60.24%, 20=29.81%, 50=8.58% 00:34:33.201 cpu : usr=3.48%, sys=6.06%, ctx=516, majf=0, minf=1 00:34:33.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:33.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:33.201 issued rwts: total=5632,5750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:33.201 00:34:33.201 Run status group 0 (all jobs): 00:34:33.201 READ: bw=100MiB/s (105MB/s), 17.7MiB/s-35.3MiB/s (18.5MB/s-37.0MB/s), io=101MiB (106MB), run=1003-1008msec 00:34:33.201 WRITE: bw=102MiB/s (107MB/s), 17.9MiB/s-35.8MiB/s (18.7MB/s-37.6MB/s), io=102MiB (107MB), run=1003-1008msec 00:34:33.201 00:34:33.201 Disk stats (read/write): 00:34:33.201 nvme0n1: ios=7681/7680, merge=0/0, ticks=54344/47750, in_queue=102094, util=88.98% 00:34:33.202 nvme0n2: ios=4137/4119, merge=0/0, ticks=43936/58038, in_queue=101974, util=89.01% 00:34:33.202 nvme0n3: ios=5120/5454, merge=0/0, ticks=41286/45655, in_queue=86941, util=88.56% 00:34:33.202 nvme0n4: ios=4608/4972, merge=0/0, ticks=33225/33229, in_queue=66454, util=89.60% 00:34:33.202 13:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:33.202 13:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2003113 00:34:33.202 13:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:33.202 13:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:33.202 [global] 
00:34:33.202 thread=1 00:34:33.202 invalidate=1 00:34:33.202 rw=read 00:34:33.202 time_based=1 00:34:33.202 runtime=10 00:34:33.202 ioengine=libaio 00:34:33.202 direct=1 00:34:33.202 bs=4096 00:34:33.202 iodepth=1 00:34:33.202 norandommap=1 00:34:33.202 numjobs=1 00:34:33.202 00:34:33.202 [job0] 00:34:33.202 filename=/dev/nvme0n1 00:34:33.202 [job1] 00:34:33.202 filename=/dev/nvme0n2 00:34:33.202 [job2] 00:34:33.202 filename=/dev/nvme0n3 00:34:33.202 [job3] 00:34:33.202 filename=/dev/nvme0n4 00:34:33.202 Could not set queue depth (nvme0n1) 00:34:33.202 Could not set queue depth (nvme0n2) 00:34:33.202 Could not set queue depth (nvme0n3) 00:34:33.202 Could not set queue depth (nvme0n4) 00:34:33.461 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.461 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.461 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.461 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:33.461 fio-3.35 00:34:33.461 Starting 4 threads 00:34:36.012 13:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:36.012 13:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:36.012 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:34:36.012 fio: pid=2003508, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.273 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.273 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:36.273 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:34:36.273 fio: pid=2003499, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.535 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4157440, buflen=4096 00:34:36.535 fio: pid=2003452, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:36.535 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.535 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:36.796 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.796 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:36.796 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5185536, buflen=4096 00:34:36.796 fio: pid=2003473, err=95/file:io_u.c:1889, func=io_u error, error=Operation not 
supported 00:34:36.796 00:34:36.796 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2003452: Wed Nov 6 13:31:18 2024 00:34:36.796 read: IOPS=348, BW=1394KiB/s (1428kB/s)(4060KiB/2912msec) 00:34:36.796 slat (usec): min=24, max=13537, avg=45.26, stdev=469.87 00:34:36.796 clat (usec): min=763, max=42136, avg=2795.51, stdev=8400.15 00:34:36.796 lat (usec): min=791, max=54917, avg=2840.79, stdev=8503.48 00:34:36.796 clat percentiles (usec): 00:34:36.796 | 1.00th=[ 816], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 947], 00:34:36.796 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:34:36.796 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1188], 00:34:36.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.796 | 99.99th=[42206] 00:34:36.796 bw ( KiB/s): min= 96, max= 3952, per=51.78%, avg=1609.60, stdev=2073.21, samples=5 00:34:36.796 iops : min= 24, max= 988, avg=402.40, stdev=518.30, samples=5 00:34:36.796 lat (usec) : 1000=57.09% 00:34:36.796 lat (msec) : 2=38.39%, 50=4.43% 00:34:36.796 cpu : usr=0.27%, sys=1.24%, ctx=1018, majf=0, minf=1 00:34:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 issued rwts: total=1016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.796 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2003473: Wed Nov 6 13:31:18 2024 00:34:36.796 read: IOPS=407, BW=1628KiB/s (1667kB/s)(5064KiB/3110msec) 00:34:36.796 slat (usec): min=4, max=32285, avg=52.23, stdev=984.20 00:34:36.796 clat (usec): min=350, max=42155, avg=2384.32, stdev=7861.19 00:34:36.796 lat (usec): min=357, max=42181, avg=2436.43, stdev=7918.18 00:34:36.796 clat percentiles (usec): 00:34:36.796 | 1.00th=[ 537], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 725], 00:34:36.796 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 766], 60.00th=[ 783], 00:34:36.796 | 70.00th=[ 799], 80.00th=[ 988], 90.00th=[ 1123], 95.00th=[ 1254], 00:34:36.796 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.796 | 99.99th=[42206] 00:34:36.796 bw ( KiB/s): min= 88, max= 5352, per=53.74%, avg=1670.17, stdev=1971.51, samples=6 00:34:36.796 iops : min= 22, max= 1338, avg=417.50, stdev=492.89, samples=6 00:34:36.796 lat (usec) : 500=0.24%, 750=35.36%, 1000=45.15% 00:34:36.796 lat (msec) : 2=15.31%, 50=3.87% 00:34:36.796 cpu : usr=0.23%, sys=0.61%, ctx=1273, majf=0, minf=2 00:34:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 issued rwts: total=1267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.796 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2003499: Wed Nov 6 13:31:18 2024 00:34:36.796 read: IOPS=26, BW=106KiB/s (109kB/s)(292KiB/2755msec) 00:34:36.796 slat (usec): min=9, max=2735, avg=63.43, stdev=314.95 00:34:36.796 clat (usec): min=776, max=42096, avg=37374.76, stdev=11971.16 00:34:36.796 lat (usec): min=805, max=43974, 
avg=37438.89, stdev=11987.04 00:34:36.796 clat percentiles (usec): 00:34:36.796 | 1.00th=[ 775], 5.00th=[ 930], 10.00th=[40633], 20.00th=[41157], 00:34:36.796 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:36.796 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:36.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.796 | 99.99th=[42206] 00:34:36.796 bw ( KiB/s): min= 96, max= 144, per=3.44%, avg=107.20, stdev=20.86, samples=5 00:34:36.796 iops : min= 24, max= 36, avg=26.80, stdev= 5.22, samples=5 00:34:36.796 lat (usec) : 1000=8.11% 00:34:36.796 lat (msec) : 2=1.35%, 50=89.19% 00:34:36.796 cpu : usr=0.15%, sys=0.00%, ctx=75, majf=0, minf=2 00:34:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.796 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2003508: Wed Nov 6 13:31:18 2024 00:34:36.796 read: IOPS=24, BW=96.4KiB/s (98.7kB/s)(248KiB/2573msec) 00:34:36.796 slat (nsec): min=25830, max=40895, avg=26672.29, stdev=1856.79 00:34:36.796 clat (usec): min=1016, max=42073, avg=41110.65, stdev=5191.02 00:34:36.796 lat (usec): min=1057, max=42099, avg=41137.32, stdev=5189.18 00:34:36.796 clat percentiles (usec): 00:34:36.796 | 1.00th=[ 1020], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:36.796 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:36.796 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:36.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:36.796 | 99.99th=[42206] 00:34:36.796 bw ( KiB/s): min= 96, max= 96, per=3.09%, avg=96.00, stdev= 0.00, samples=5 00:34:36.796 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:36.796 lat (msec) : 2=1.59%, 50=96.83% 00:34:36.796 cpu : usr=0.12%, sys=0.00%, ctx=66, majf=0, minf=2 00:34:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.796 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:36.796 00:34:36.796 Run status group 0 (all jobs): 00:34:36.796 READ: bw=3107KiB/s (3182kB/s), 96.4KiB/s-1628KiB/s (98.7kB/s-1667kB/s), io=9664KiB (9896kB), run=2573-3110msec 00:34:36.796 00:34:36.796 Disk stats (read/write): 00:34:36.796 nvme0n1: ios=1012/0, merge=0/0, ticks=2668/0, in_queue=2668, util=92.95% 00:34:36.796 nvme0n2: ios=1265/0, merge=0/0, ticks=2960/0, in_queue=2960, util=93.32% 00:34:36.796 nvme0n3: ios=68/0, merge=0/0, ticks=2525/0, in_queue=2525, util=95.76% 00:34:36.796 nvme0n4: ios=94/0, merge=0/0, ticks=3415/0, in_queue=3415, util=98.90% 00:34:36.796 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:36.796 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:37.058 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.058 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:37.319 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.319 13:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:37.319 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:37.319 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2003113 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:37.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:37.580 nvmf hotplug test: fio failed as expected 00:34:37.580 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 
-- # rm -f ./local-job1-1-verify.state 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.840 rmmod nvme_tcp 00:34:37.840 rmmod nvme_fabrics 00:34:37.840 rmmod nvme_keyring 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1999920 ']' 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1999920 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1999920 ']' 00:34:37.840 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1999920 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1999920 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1999920' 00:34:38.101 killing process with pid 1999920 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1999920 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1999920 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.101 13:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.101 13:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.648 13:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.648 00:34:40.648 real 0m28.138s 00:34:40.648 user 2m13.975s 00:34:40.648 sys 0m12.049s 00:34:40.648 13:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:40.648 13:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:40.648 ************************************ 00:34:40.648 END TEST nvmf_fio_target 00:34:40.648 ************************************ 00:34:40.648 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:40.648 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:40.648 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:40.648 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:40.648 ************************************ 00:34:40.648 START TEST nvmf_bdevio 00:34:40.648 ************************************ 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:40.649 * Looking for test storage... 
00:34:40.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:40.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.649 --rc genhtml_branch_coverage=1 00:34:40.649 --rc genhtml_function_coverage=1 00:34:40.649 --rc genhtml_legend=1 00:34:40.649 --rc geninfo_all_blocks=1 00:34:40.649 --rc geninfo_unexecuted_blocks=1 00:34:40.649 00:34:40.649 ' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:40.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.649 --rc genhtml_branch_coverage=1 00:34:40.649 --rc genhtml_function_coverage=1 00:34:40.649 --rc genhtml_legend=1 00:34:40.649 --rc geninfo_all_blocks=1 00:34:40.649 --rc geninfo_unexecuted_blocks=1 00:34:40.649 00:34:40.649 ' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:40.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.649 --rc genhtml_branch_coverage=1 00:34:40.649 --rc genhtml_function_coverage=1 00:34:40.649 --rc genhtml_legend=1 00:34:40.649 --rc geninfo_all_blocks=1 00:34:40.649 --rc geninfo_unexecuted_blocks=1 00:34:40.649 00:34:40.649 ' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:40.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.649 --rc genhtml_branch_coverage=1 00:34:40.649 --rc genhtml_function_coverage=1 00:34:40.649 --rc genhtml_legend=1 00:34:40.649 --rc geninfo_all_blocks=1 00:34:40.649 --rc geninfo_unexecuted_blocks=1 00:34:40.649 00:34:40.649 ' 00:34:40.649 13:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:40.649 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.650 13:31:22 
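The enormous PATH values above come from paths/export.sh prepending the same /opt/golangci, /opt/protoc and /opt/go directories on every source; lookup still works, the variable just accumulates duplicates. A small illustrative deduplicator (not part of the SPDK scripts, shown only to make the repetition above legible):

dedupe_path() {
    local entry out=
    local IFS=:
    for entry in $PATH; do                    # unquoted on purpose: split on ':'
        [[ -z $entry ]] && continue
        case ":$out:" in
            *":$entry:"*) ;;                  # already kept once, skip
            *) out=${out:+$out:}$entry ;;
        esac
    done
    printf '%s\n' "$out"
}

PATH=$(dedupe_path) && export PATH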
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:40.650 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.884 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:48.885 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:48.885 13:31:29 
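gather_supported_nvmf_pci_devs keys an associative pci_bus_cache array by "vendor:device" and pulls the E810 addresses (0x8086:0x159b, matched above) out of it, then globs sysfs to find the kernel net device behind each port. A condensed reconstruction of both steps (assumes lspci from pciutils; the real nvmf/common.sh populates the cache differently):

declare -A pci_bus_cache=()
while read -r addr class vd _; do
    [[ $class == "0200:" ]] || continue                  # Ethernet-class functions only
    pci_bus_cache["0x${vd%%:*}:0x${vd#*:}"]+=" $addr"
done < <(lspci -Dn)

intel=0x8086
e810=(${pci_bus_cache["$intel:0x159b"]})                 # unquoted: split the cached addresses
for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # one subdirectory per net device
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip to the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

The cvl_0_0/cvl_0_1 names reported in the trace come from udev renaming done earlier in the CI setup, not from this discovery pass.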
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:48.885 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:48.885 Found net devices under 0000:31:00.0: cvl_0_0 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:48.885 Found net devices under 0000:31:00.1: cvl_0_1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:34:48.885 00:34:48.885 --- 10.0.0.2 ping statistics --- 00:34:48.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.885 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:34:48.885 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:34:48.885 00:34:48.885 --- 10.0.0.1 ping statistics --- 00:34:48.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.885 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.886 13:31:29 
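nvmf_tcp_init above carves a point-to-point test topology out of the two physical ports: the target-side interface moves into its own network namespace, both ends get 10.0.0.x/24 addresses, an iptables ACCEPT rule opens port 4420 on the initiator side, and a single ping in each direction proves the path before any NVMe/TCP traffic flows. A minimal sketch of the same recipe (interface and namespace names taken from the trace; run as root):

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"             # target port now lives in the namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic through the initiator-side firewall
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

The NVMF_APP command line is then prefixed with ip netns exec "$NS" (the NVMF_TARGET_NS_CMD array above), so the target binds 10.0.0.2 from inside the namespace.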
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2008585 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2008585 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2008585 ']' 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:48.886 13:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:48.886 [2024-11-06 13:31:29.961579] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.886 [2024-11-06 13:31:29.962729] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:34:48.886 [2024-11-06 13:31:29.962786] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.886 [2024-11-06 13:31:30.065969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.886 [2024-11-06 13:31:30.118393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.886 [2024-11-06 13:31:30.118442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.886 [2024-11-06 13:31:30.118451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.886 [2024-11-06 13:31:30.118460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.886 [2024-11-06 13:31:30.118466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.886 [2024-11-06 13:31:30.120296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:48.886 [2024-11-06 13:31:30.120470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:48.886 [2024-11-06 13:31:30.120637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.886 [2024-11-06 13:31:30.120638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:48.886 [2024-11-06 13:31:30.206358] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
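nvmfappstart launches nvmf_tgt inside the namespace with core mask 0x78 and --interrupt-mode, then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers before any configuration is attempted. A sketch of that wait loop (the real helper in autotest_common.sh carries more bookkeeping; the rpc.py path and the use of spdk_get_version as a liveness probe are assumptions):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!

waitforlisten() {                 # poll the RPC socket until the app answers or the pid dies
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1                                    # app exited early
        ./scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}

waitforlisten "$nvmfpid"

A Unix-domain socket is filesystem-based, so rpc.py can reach it from outside the network namespace.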
00:34:48.886 [2024-11-06 13:31:30.207300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:48.886 [2024-11-06 13:31:30.207688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:48.886 [2024-11-06 13:31:30.208258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:48.886 [2024-11-06 13:31:30.208312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:48.886 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:48.886 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:48.886 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:48.886 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.886 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 [2024-11-06 13:31:30.829532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 Malloc0 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.146 13:31:30 
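With the target answering RPC, bdevio.sh drives the whole fabric setup through rpc_cmd: a TCP transport, a 64 MiB/512 B malloc bdev, a subsystem, its namespace, and (just below) the 10.0.0.2:4420 listener. The same sequence as plain rpc.py invocations (a sketch; in the real harness rpc_cmd multiplexes these over one persistent rpc.py connection):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB bdev with 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio initiator is then handed its bdev_nvme_attach_controller config as JSON over a process substitution (--json /dev/fd/62), which is the generated document visible in the trace below.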
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:49.146 [2024-11-06 13:31:30.921844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:49.146 { 00:34:49.146 "params": { 00:34:49.146 "name": "Nvme$subsystem", 00:34:49.146 "trtype": "$TEST_TRANSPORT", 00:34:49.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:49.146 "adrfam": "ipv4", 00:34:49.146 "trsvcid": "$NVMF_PORT", 00:34:49.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:49.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:49.146 "hdgst": ${hdgst:-false}, 00:34:49.146 "ddgst": ${ddgst:-false} 00:34:49.146 }, 00:34:49.146 "method": "bdev_nvme_attach_controller" 00:34:49.146 } 00:34:49.146 EOF 00:34:49.146 )") 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:49.146 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:49.146 "params": { 00:34:49.146 "name": "Nvme1", 00:34:49.146 "trtype": "tcp", 00:34:49.146 "traddr": "10.0.0.2", 00:34:49.146 "adrfam": "ipv4", 00:34:49.146 "trsvcid": "4420", 00:34:49.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:49.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:49.146 "hdgst": false, 00:34:49.146 "ddgst": false 00:34:49.146 }, 00:34:49.146 "method": "bdev_nvme_attach_controller" 00:34:49.146 }' 00:34:49.146 [2024-11-06 13:31:30.978865] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:34:49.146 [2024-11-06 13:31:30.978937] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008689 ] 00:34:49.405 [2024-11-06 13:31:31.075015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:49.405 [2024-11-06 13:31:31.132738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.405 [2024-11-06 13:31:31.132900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:49.405 [2024-11-06 13:31:31.133078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:49.664 I/O targets: 00:34:49.664 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:49.664 00:34:49.664 00:34:49.664 CUnit - A unit testing framework for C - Version 2.1-3 00:34:49.664 http://cunit.sourceforge.net/ 00:34:49.664 00:34:49.664 00:34:49.664 Suite: bdevio tests on: Nvme1n1 00:34:49.664 Test: blockdev write read block ...passed 00:34:49.664 Test: blockdev write zeroes read block ...passed 00:34:49.664 Test: blockdev write zeroes read no split ...passed 00:34:49.664 Test: blockdev write zeroes read split ...passed 00:34:49.664 Test: blockdev write zeroes read split partial ...passed 00:34:49.664 Test: blockdev reset ...[2024-11-06 13:31:31.505720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:49.664 [2024-11-06 13:31:31.505832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13081c0 (9): Bad file descriptor 00:34:49.924 [2024-11-06 13:31:31.641619] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:49.924 passed 00:34:49.924 Test: blockdev write read 8 blocks ...passed 00:34:49.924 Test: blockdev write read size > 128k ...passed 00:34:49.924 Test: blockdev write read invalid size ...passed 00:34:49.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:49.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:49.924 Test: blockdev write read max offset ...passed 00:34:49.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:49.924 Test: blockdev writev readv 8 blocks ...passed 00:34:49.924 Test: blockdev writev readv 30 x 1block ...passed 00:34:49.924 Test: blockdev writev readv block ...passed 00:34:50.184 Test: blockdev writev readv size > 128k ...passed 00:34:50.184 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:50.184 Test: blockdev comparev and writev ...[2024-11-06 13:31:31.829223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.829273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.829290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.829299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.829960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.829974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.829990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.830000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.830665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.830679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.830693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.830702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.831328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.831342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.831356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:50.184 [2024-11-06 13:31:31.831365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:50.184 passed 00:34:50.184 Test: blockdev nvme passthru rw ...passed 00:34:50.184 Test: blockdev nvme passthru vendor specific ...[2024-11-06 13:31:31.915694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.184 [2024-11-06 13:31:31.915712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.916096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.184 [2024-11-06 13:31:31.916108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.916495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.184 [2024-11-06 13:31:31.916506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:50.184 [2024-11-06 13:31:31.916902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:50.184 [2024-11-06 13:31:31.916921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:50.184 passed 00:34:50.184 Test: blockdev nvme admin passthru ...passed 00:34:50.184 Test: blockdev copy ...passed 00:34:50.184 00:34:50.184 Run Summary: Type Total Ran Passed Failed Inactive 00:34:50.184 suites 1 1 n/a 0 0 00:34:50.184 tests 23 23 23 0 0 00:34:50.184 asserts 152 152 152 0 n/a 00:34:50.184 00:34:50.184 Elapsed time = 1.292 seconds 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:50.445 rmmod nvme_tcp 00:34:50.445 rmmod nvme_fabrics 00:34:50.445 rmmod nvme_keyring 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
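After the 23 bdevio tests pass, the EXIT trap tears everything down in reverse: the subsystem is deleted, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded above, and the target is killed by pid before the SPDK iptables rule and the namespace are stripped, as the killprocess/iptr trace below shows. The kill half follows a common safe pattern (a sketch, not the verbatim autotest_common.sh helper):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0            # already gone
    if [[ $(ps --no-headers -o comm= "$pid") == sudo ]]; then
        sudo kill "$pid"                               # signal through sudo when wrapped
    else
        kill "$pid"                                    # here comm is reactor_3, so plain kill
    fi
    wait "$pid" 2> /dev/null || true                   # reap the child; exit status is irrelevant
}

killprocess "$nvmfpid"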
00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2008585 ']' 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2008585 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2008585 ']' 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2008585 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2008585 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2008585' 00:34:50.445 killing process with pid 2008585 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2008585 00:34:50.445 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2008585 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:50.706 13:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:53.248 00:34:53.248 real 0m12.455s 00:34:53.248 user 
0m10.019s 00:34:53.248 sys 0m6.618s 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:53.248 ************************************ 00:34:53.248 END TEST nvmf_bdevio 00:34:53.248 ************************************ 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:53.248 00:34:53.248 real 5m1.022s 00:34:53.248 user 10m7.394s 00:34:53.248 sys 2m3.316s 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:53.248 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.248 ************************************ 00:34:53.248 END TEST nvmf_target_core_interrupt_mode 00:34:53.248 ************************************ 00:34:53.248 13:31:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:53.248 13:31:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:53.248 13:31:34 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:53.248 13:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.248 ************************************ 00:34:53.248 START TEST nvmf_interrupt 00:34:53.248 ************************************ 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:53.248 * Looking for test storage... 
00:34:53.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:53.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.248 --rc genhtml_branch_coverage=1 00:34:53.248 --rc genhtml_function_coverage=1 00:34:53.248 --rc genhtml_legend=1 00:34:53.248 --rc geninfo_all_blocks=1 00:34:53.248 --rc geninfo_unexecuted_blocks=1 00:34:53.248 00:34:53.248 ' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:53.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.248 --rc genhtml_branch_coverage=1 00:34:53.248 --rc genhtml_function_coverage=1 00:34:53.248 --rc genhtml_legend=1 00:34:53.248 --rc geninfo_all_blocks=1 00:34:53.248 --rc geninfo_unexecuted_blocks=1 00:34:53.248 00:34:53.248 ' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:53.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.248 --rc genhtml_branch_coverage=1 00:34:53.248 --rc genhtml_function_coverage=1 00:34:53.248 --rc genhtml_legend=1 00:34:53.248 --rc geninfo_all_blocks=1 00:34:53.248 --rc geninfo_unexecuted_blocks=1 00:34:53.248 00:34:53.248 ' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:53.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.248 --rc genhtml_branch_coverage=1 00:34:53.248 --rc genhtml_function_coverage=1 00:34:53.248 --rc genhtml_legend=1 00:34:53.248 --rc geninfo_all_blocks=1 00:34:53.248 --rc geninfo_unexecuted_blocks=1 00:34:53.248 00:34:53.248 ' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:53.248 13:31:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:01.382 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.382 13:31:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:01.382 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:01.382 Found net devices under 0000:31:00.0: cvl_0_0 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:01.382 Found net devices under 0000:31:00.1: cvl_0_1 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:01.382 13:31:41 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.382 13:31:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:01.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:35:01.382 00:35:01.382 --- 10.0.0.2 ping statistics --- 00:35:01.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.382 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:35:01.382 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:35:01.382 00:35:01.382 --- 10.0.0.1 ping statistics --- 00:35:01.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.383 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2013141 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2013141 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2013141 ']' 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:01.383 13:31:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.383 [2024-11-06 13:31:42.376381] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.383 [2024-11-06 13:31:42.377880] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:35:01.383 [2024-11-06 13:31:42.377947] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.383 [2024-11-06 13:31:42.478238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:01.383 [2024-11-06 13:31:42.530370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
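
(Note: the trace above starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the paths shown in the trace; the 10-second budget and the socket-only probe are simplifications — the real waitforlisten helper also polls the RPC server itself:)

    NS=cvl_0_0_ns_spdk
    APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    # start the target in interrupt mode on cores 0-1, inside the namespace
    ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # wait (up to ~10s) for the UNIX-domain RPC socket to appear
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # fail fast if the target died during startup
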
00:35:01.383 [2024-11-06 13:31:42.530421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.383 [2024-11-06 13:31:42.530430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.383 [2024-11-06 13:31:42.530437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.383 [2024-11-06 13:31:42.530444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.383 [2024-11-06 13:31:42.532136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.383 [2024-11-06 13:31:42.532141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.383 [2024-11-06 13:31:42.608032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:01.383 [2024-11-06 13:31:42.608808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:01.383 [2024-11-06 13:31:42.609002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:01.383 5000+0 records in 00:35:01.383 5000+0 records out 00:35:01.383 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186584 s, 549 MB/s 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.383 AIO0 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.383 [2024-11-06 13:31:43.265187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.383 13:31:43 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.383 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.643 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:01.644 [2024-11-06 13:31:43.309528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2013141 0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 0 idle 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013141 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.33 reactor_0' 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013141 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.33 reactor_0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2013141 1 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 1 idle 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:01.644 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013196 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013196 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2013443 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2013141 0 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2013141 0 busy 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:01.905 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013141 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.49 reactor_0' 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013141 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.49 reactor_0 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2013141 1 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2013141 1 busy 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:02.166 13:31:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:02.166 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013196 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.27 reactor_1' 00:35:02.166 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013196 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:00.27 reactor_1 00:35:02.166 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:02.166 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:02.428 13:31:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2013443 00:35:12.420 Initializing NVMe Controllers 00:35:12.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:12.420 Controller IO queue size 256, less than required. 00:35:12.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:12.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:12.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:12.420 Initialization complete. Launching workers. 
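
(The reactor_is_busy checks above boil down to one top sample per probe: batch mode, thread view, filtered to the reactor row, with %CPU in field 9. A sketch of that probe — the helper name is invented here; the pid, the .9 truncation, and the BUSY_THRESHOLD=30 used during the perf run all follow the trace:)

    reactor_cpu_rate() {
        # one batch iteration, thread view, wide output; keep the reactor row
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
            sed -e 's/^\s*//g' | awk '{print $9}'
    }
    rate=$(reactor_cpu_rate 2013141 0)
    rate=${rate%.*}                   # 99.9 -> 99, as in the trace
    if (( rate >= 30 )); then         # BUSY_THRESHOLD=30 while perf is running
        echo "reactor_0 busy at ${rate}%"
    else
        echo "reactor_0 idle at ${rate}%"
    fi
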
00:35:12.420 ======================================================== 00:35:12.420 Latency(us) 00:35:12.420 Device Information : IOPS MiB/s Average min max 00:35:12.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19742.70 77.12 12971.90 3777.53 31407.69 00:35:12.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19361.70 75.63 13223.79 7476.36 27724.67 00:35:12.420 ======================================================== 00:35:12.420 Total : 39104.40 152.75 13096.62 3777.53 31407.69 00:35:12.420 00:35:12.420 13:31:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:12.420 13:31:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2013141 0 00:35:12.420 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 0 idle 00:35:12.420 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:12.421 13:31:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013141 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.33 reactor_0' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013141 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.33 reactor_0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2013141 1 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 1 idle 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013196 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013196 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:12.421 13:31:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:12.993 13:31:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:12.993 13:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:12.993 13:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:12.993 13:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:12.993 13:31:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2013141 0 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 0 idle 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:15.538 13:31:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013141 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.71 reactor_0' 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013141 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.71 reactor_0 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2013141 1 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2013141 1 idle 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2013141 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
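
(The nvme connect above, together with the waitforserial poll that follows it, amounts to the host-side sequence sketched below. Flags are standard nvme-cli; the 15-retry budget and the SPDKISFASTANDAWESOME serial match the trace, and error handling is deliberately minimal:)

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # the namespace is usable once a block device reports the test serial
    for ((i = 0; i <= 15; i++)); do
        n=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( n >= 1 )) && break
        sleep 2
    done
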
00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:15.538 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2013141 -w 256 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2013196 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2013196 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:15.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:15.539 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:15.799 rmmod nvme_tcp 00:35:15.799 rmmod nvme_fabrics 00:35:15.799 rmmod nvme_keyring 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2013141 ']' 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2013141 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2013141 ']' 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2013141 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2013141 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2013141' 00:35:15.799 killing process with pid 2013141 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2013141 00:35:15.799 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2013141 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:16.059 13:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.969 13:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.969 00:35:17.969 real 0m25.144s 00:35:17.969 user 0m40.227s 00:35:17.969 sys 0m9.475s 00:35:17.969 13:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:17.969 13:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:17.969 ************************************ 00:35:17.969 END TEST nvmf_interrupt 00:35:17.969 ************************************ 00:35:17.969 00:35:17.969 real 30m18.805s 00:35:17.969 user 61m25.899s 00:35:17.969 sys 10m18.454s 00:35:17.969 13:31:59 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:17.969 13:31:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:17.969 ************************************ 00:35:17.969 END TEST nvmf_tcp 00:35:17.969 ************************************ 00:35:18.228 13:31:59 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:18.229 13:31:59 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:18.229 13:31:59 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:18.229 13:31:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:18.229 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:35:18.229 ************************************ 00:35:18.229 START TEST spdkcli_nvmf_tcp 00:35:18.229 ************************************ 00:35:18.229 13:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:18.229 * Looking for test storage... 00:35:18.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:18.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.229 --rc genhtml_branch_coverage=1 00:35:18.229 --rc genhtml_function_coverage=1 00:35:18.229 --rc genhtml_legend=1 00:35:18.229 --rc geninfo_all_blocks=1 00:35:18.229 --rc geninfo_unexecuted_blocks=1 00:35:18.229 00:35:18.229 ' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:18.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.229 --rc genhtml_branch_coverage=1 00:35:18.229 --rc genhtml_function_coverage=1 00:35:18.229 --rc genhtml_legend=1 00:35:18.229 --rc geninfo_all_blocks=1 00:35:18.229 --rc geninfo_unexecuted_blocks=1 00:35:18.229 00:35:18.229 ' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:18.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.229 --rc genhtml_branch_coverage=1 00:35:18.229 --rc genhtml_function_coverage=1 00:35:18.229 --rc genhtml_legend=1 00:35:18.229 --rc geninfo_all_blocks=1 00:35:18.229 --rc geninfo_unexecuted_blocks=1 00:35:18.229 00:35:18.229 ' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:18.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.229 --rc genhtml_branch_coverage=1 00:35:18.229 --rc genhtml_function_coverage=1 00:35:18.229 --rc genhtml_legend=1 00:35:18.229 --rc geninfo_all_blocks=1 00:35:18.229 --rc geninfo_unexecuted_blocks=1 00:35:18.229 00:35:18.229 ' 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:18.229 
13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.229 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:18.490 13:32:00 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:18.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.490 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2016644 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2016644 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2016644 ']' 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:18.491 13:32:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.491 [2024-11-06 13:32:00.207531] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
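The run_nvmf_tgt/waitforlisten exchange traced above is the harness pattern for bringing up the target: launch nvmf_tgt in the background, then poll its JSON-RPC socket until the app answers before any spdkcli traffic is sent. A minimal sketch of that pattern, assuming an SPDK checkout at $SPDK_ROOT (a placeholder for this illustration) and the stock rpc.py client:

  # Launch the NVMe-oF target on two cores (mask 0x3), main core 0, as above.
  "$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x3 -p 0 &
  tgt_pid=$!
  # Poll the default RPC socket; rpc_get_methods is a cheap query that
  # starts succeeding once the app is listening on /var/tmp/spdk.sock.
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The real waitforlisten additionally bounds the loop (max_retries=100 above) and checks between polls that the pid is still alive, which is why a crashed target fails the job quickly instead of hanging it.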
00:35:18.491 [2024-11-06 13:32:00.207597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016644 ] 00:35:18.491 [2024-11-06 13:32:00.288915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:18.491 [2024-11-06 13:32:00.347779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.491 [2024-11-06 13:32:00.347810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:19.431 13:32:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:19.431 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:19.431 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:19.431 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:19.431 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:19.431 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:19.431 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:19.431 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:19.431 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:19.431 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:19.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:19.431 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:19.431 ' 00:35:21.974 [2024-11-06 13:32:03.781743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.356 [2024-11-06 13:32:05.141965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:25.896 [2024-11-06 13:32:07.669103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:28.437 [2024-11-06 13:32:09.887317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:29.821 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:29.821 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:29.821 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:29.821 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:29.821 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:29.821 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:29.821 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:29.821 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:29.821 13:32:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.395 
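Each 'Executing command' line above is spdkcli_job.py replaying one spdkcli command together with a match string and a True/False flag for whether that string is expected in the output; check_match then diffs the tree printed by spdkcli.py ll /nvmf against spdkcli_nvmf.test.match. The same configuration can be driven one command per invocation in spdkcli's one-shot argv mode, the mode the ll listing above already uses. A condensed sketch covering one bdev, the transport and one subsystem, reusing the $SPDK_ROOT placeholder from the earlier sketch:

  cli="$SPDK_ROOT/scripts/spdkcli.py"
  # A 32 MiB malloc bdev with 512-byte blocks to back the namespace.
  $cli /bdevs/malloc create 32 512 Malloc1
  # One transport per target; qpair and IO-unit sizing as in the job above.
  $cli nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  # Subsystem with serial number, namespace cap and open host access,
  # then a namespace (nsid 1) and a TCP listener.
  $cli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  $cli ll /nvmf    # the view check_match compares against the .match file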
13:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.395 13:32:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:30.395 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:30.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:30.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:30.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:30.396 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:30.396 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:30.396 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:30.396 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:30.396 ' 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:36.980 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:36.980 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:36.980 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:36.980 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.980 
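The clear-config job just executed walks the tree in reverse dependency order: namespaces, hosts and listeners come off a subsystem before the subsystem itself is deleted, and the malloc bdevs are removed last, once nothing references them. Condensed to one subsystem, again in one-shot mode with $cli as in the sketch above; every command below appears verbatim in the job:

  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  $cli /nvmf/subsystem delete_all      # drops every remaining subsystem
  $cli /bdevs/malloc delete Malloc1    # bdevs last, after all references are gone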
13:32:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2016644 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2016644 ']' 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2016644 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2016644 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2016644' 00:35:36.980 killing process with pid 2016644 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2016644 00:35:36.980 13:32:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2016644 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2016644 ']' 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2016644 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2016644 ']' 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2016644 00:35:36.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2016644) - No such process 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 2016644 is not found' 00:35:36.980 Process with pid 2016644 is not found 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:36.980 00:35:36.980 real 0m18.143s 00:35:36.980 user 0m40.274s 00:35:36.980 sys 0m0.888s 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:36.980 13:32:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:36.980 ************************************ 00:35:36.980 END TEST spdkcli_nvmf_tcp 00:35:36.980 ************************************ 00:35:36.980 13:32:18 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:36.980 13:32:18 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:36.980 13:32:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:36.980 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:35:36.980 ************************************ 00:35:36.980 START TEST nvmf_identify_passthru 00:35:36.980 ************************************ 00:35:36.980 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:36.980 * Looking for test 
storage... 00:35:36.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:36.980 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:36.980 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:36.980 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:36.980 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:36.980 13:32:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:36.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.981 --rc genhtml_branch_coverage=1 00:35:36.981 --rc genhtml_function_coverage=1 00:35:36.981 --rc genhtml_legend=1 00:35:36.981 --rc geninfo_all_blocks=1 00:35:36.981 --rc geninfo_unexecuted_blocks=1 00:35:36.981 00:35:36.981 ' 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:36.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.981 --rc genhtml_branch_coverage=1 00:35:36.981 --rc genhtml_function_coverage=1 00:35:36.981 --rc genhtml_legend=1 00:35:36.981 --rc geninfo_all_blocks=1 00:35:36.981 --rc geninfo_unexecuted_blocks=1 00:35:36.981 00:35:36.981 ' 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:36.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.981 --rc genhtml_branch_coverage=1 00:35:36.981 --rc genhtml_function_coverage=1 00:35:36.981 --rc genhtml_legend=1 00:35:36.981 --rc geninfo_all_blocks=1 00:35:36.981 --rc geninfo_unexecuted_blocks=1 00:35:36.981 00:35:36.981 ' 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:36.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:36.981 --rc genhtml_branch_coverage=1 00:35:36.981 --rc genhtml_function_coverage=1 00:35:36.981 --rc genhtml_legend=1 00:35:36.981 --rc geninfo_all_blocks=1 00:35:36.981 --rc geninfo_unexecuted_blocks=1 00:35:36.981 00:35:36.981 ' 00:35:36.981 13:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:36.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:36.981 13:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:36.981 13:32:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:36.981 13:32:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:36.981 13:32:18 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:36.981 13:32:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.126 13:32:25 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:45.126 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:45.126 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:45.126 Found net devices under 0000:31:00.0: cvl_0_0 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.126 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:45.127 Found net devices under 0000:31:00.1: cvl_0_1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.127 13:32:25 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:35:45.127 00:35:45.127 --- 10.0.0.2 ping statistics --- 00:35:45.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.127 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:35:45.127 00:35:45.127 --- 10.0.0.1 ping statistics --- 00:35:45.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.127 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.127 13:32:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.127 13:32:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.127 13:32:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:45.127 13:32:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:45.127 13:32:26 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:45.127 13:32:26 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:45.127 13:32:26 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605500 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:45.127 13:32:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2024063 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:45.388 13:32:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2024063 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 2024063 ']' 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:45.388 13:32:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:45.388 [2024-11-06 13:32:27.183608] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:35:45.388 [2024-11-06 13:32:27.183674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.388 [2024-11-06 13:32:27.283001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:45.649 [2024-11-06 13:32:27.338708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.649 [2024-11-06 13:32:27.338773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:45.649 [2024-11-06 13:32:27.338783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.649 [2024-11-06 13:32:27.338790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.649 [2024-11-06 13:32:27.338797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.649 [2024-11-06 13:32:27.340665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.649 [2024-11-06 13:32:27.340819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.649 [2024-11-06 13:32:27.340899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.649 [2024-11-06 13:32:27.340899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:46.221 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.221 INFO: Log level set to 20 00:35:46.221 INFO: Requests: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "method": "nvmf_set_config", 00:35:46.221 "id": 1, 00:35:46.221 "params": { 00:35:46.221 "admin_cmd_passthru": { 00:35:46.221 "identify_ctrlr": true 00:35:46.221 } 00:35:46.221 } 00:35:46.221 } 00:35:46.221 00:35:46.221 INFO: response: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "id": 1, 00:35:46.221 "result": true 00:35:46.221 } 00:35:46.221 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.221 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.221 INFO: Setting log level to 20 00:35:46.221 INFO: Setting log level to 20 00:35:46.221 INFO: Log level set to 20 00:35:46.221 INFO: Log level set to 20 00:35:46.221 INFO: Requests: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "method": "framework_start_init", 00:35:46.221 "id": 1 00:35:46.221 } 00:35:46.221 00:35:46.221 INFO: Requests: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "method": "framework_start_init", 00:35:46.221 "id": 1 00:35:46.221 } 00:35:46.221 00:35:46.221 [2024-11-06 13:32:28.103814] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:46.221 INFO: response: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "id": 1, 00:35:46.221 "result": true 00:35:46.221 } 00:35:46.221 00:35:46.221 INFO: response: 00:35:46.221 { 00:35:46.221 "jsonrpc": "2.0", 00:35:46.221 "id": 1, 00:35:46.221 "result": true 00:35:46.221 } 00:35:46.221 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.221 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.221 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.221 13:32:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:46.221 INFO: Setting log level to 40 00:35:46.221 INFO: Setting log level to 40 00:35:46.221 INFO: Setting log level to 40 00:35:46.221 [2024-11-06 13:32:28.117378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.481 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.481 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:46.481 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.481 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.481 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:46.481 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.481 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.742 Nvme0n1 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.742 [2024-11-06 13:32:28.523691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:46.742 [ 00:35:46.742 { 00:35:46.742 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:46.742 "subtype": "Discovery", 00:35:46.742 "listen_addresses": [], 00:35:46.742 "allow_any_host": true, 00:35:46.742 "hosts": [] 00:35:46.742 }, 00:35:46.742 { 00:35:46.742 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.742 "subtype": "NVMe", 00:35:46.742 "listen_addresses": [ 00:35:46.742 { 00:35:46.742 "trtype": "TCP", 00:35:46.742 "adrfam": "IPv4", 00:35:46.742 "traddr": "10.0.0.2", 00:35:46.742 "trsvcid": "4420" 00:35:46.742 } 00:35:46.742 ], 00:35:46.742 "allow_any_host": true, 00:35:46.742 "hosts": [], 00:35:46.742 "serial_number": 
"SPDK00000000000001", 00:35:46.742 "model_number": "SPDK bdev Controller", 00:35:46.742 "max_namespaces": 1, 00:35:46.742 "min_cntlid": 1, 00:35:46.742 "max_cntlid": 65519, 00:35:46.742 "namespaces": [ 00:35:46.742 { 00:35:46.742 "nsid": 1, 00:35:46.742 "bdev_name": "Nvme0n1", 00:35:46.742 "name": "Nvme0n1", 00:35:46.742 "nguid": "36344730526055000025384500000031", 00:35:46.742 "uuid": "36344730-5260-5500-0025-384500000031" 00:35:46.742 } 00:35:46.742 ] 00:35:46.742 } 00:35:46.742 ] 00:35:46.742 13:32:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:46.742 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:47.003 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:35:47.003 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:47.003 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:47.003 13:32:28 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:47.263 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:47.263 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:35:47.263 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:47.263 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.264 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:47.264 13:32:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.264 rmmod nvme_tcp 00:35:47.264 rmmod nvme_fabrics 00:35:47.264 rmmod nvme_keyring 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2024063 ']' 00:35:47.264 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2024063 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 2024063 ']' 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 2024063 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:47.264 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2024063 00:35:47.524 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:47.524 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:47.524 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2024063' 00:35:47.524 killing process with pid 2024063 00:35:47.524 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 2024063 00:35:47.524 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 2024063 00:35:47.524 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.784 13:32:29 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.784 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:47.784 13:32:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.699 13:32:31 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:49.699 00:35:49.699 real 0m13.375s 00:35:49.699 user 0m10.581s 00:35:49.699 sys 0m6.860s 00:35:49.699 13:32:31 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:49.699 13:32:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:49.699 ************************************ 00:35:49.699 END TEST nvmf_identify_passthru 00:35:49.699 ************************************ 00:35:49.699 13:32:31 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:49.699 13:32:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:49.699 13:32:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:49.699 13:32:31 -- common/autotest_common.sh@10 -- # set +x 00:35:49.699 ************************************ 00:35:49.699 START TEST nvmf_dif 00:35:49.699 ************************************ 00:35:49.699 13:32:31 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:49.960 * Looking for test storage... 
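For reference, the identify-passthru flow exercised by the test that just completed boils down to the RPC sequence visible in the trace above. A condensed sketch (rpc_cmd in the trace wraps scripts/rpc.py; the sequence assumes a target started with initialization deferred, which is why framework_start_init is invoked explicitly, and it reuses this run's PCI address, NQN, and listener values):

  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # forward Identify admin commands to the backing controller
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # The pass/fail check: serial and model numbers read over the fabric must
  # match what the local PCIe controller reports.
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'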
00:35:49.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.961 --rc genhtml_branch_coverage=1 00:35:49.961 --rc genhtml_function_coverage=1 00:35:49.961 --rc genhtml_legend=1 00:35:49.961 --rc geninfo_all_blocks=1 00:35:49.961 --rc geninfo_unexecuted_blocks=1 00:35:49.961 00:35:49.961 ' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.961 --rc genhtml_branch_coverage=1 00:35:49.961 --rc genhtml_function_coverage=1 00:35:49.961 --rc genhtml_legend=1 00:35:49.961 --rc geninfo_all_blocks=1 00:35:49.961 --rc geninfo_unexecuted_blocks=1 00:35:49.961 00:35:49.961 ' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.961 --rc genhtml_branch_coverage=1 00:35:49.961 --rc genhtml_function_coverage=1 00:35:49.961 --rc genhtml_legend=1 00:35:49.961 --rc geninfo_all_blocks=1 00:35:49.961 --rc geninfo_unexecuted_blocks=1 00:35:49.961 00:35:49.961 ' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:49.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:49.961 --rc genhtml_branch_coverage=1 00:35:49.961 --rc genhtml_function_coverage=1 00:35:49.961 --rc genhtml_legend=1 00:35:49.961 --rc geninfo_all_blocks=1 00:35:49.961 --rc geninfo_unexecuted_blocks=1 00:35:49.961 00:35:49.961 ' 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.961 13:32:31 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.961 13:32:31 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.961 13:32:31 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.961 13:32:31 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.961 13:32:31 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:49.961 13:32:31 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:49.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:49.961 13:32:31 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:49.961 13:32:31 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:49.961 13:32:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.962 13:32:31 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:49.962 13:32:31 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:49.962 13:32:31 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:49.962 13:32:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:58.154 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.154 
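The device discovery being traced here builds per-vendor lists of supported NIC device IDs (e810, x722, mlx), keeps PCI functions whose vendor:device pair matches, and then resolves their kernel net names through sysfs. A hand-run equivalent, as a sketch (lspci from pciutils is an assumption on my part, not something the script uses; 8086:159b is the Intel E810 pair matched in this run):

  lspci -d 8086:159b                         # hypothetical check: lists the E810 ports, e.g. 0000:31:00.0 and 0000:31:00.1
  ls /sys/bus/pci/devices/0000:31:00.0/net   # kernel netdev under that function, e.g. cvl_0_0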
13:32:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:58.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:58.154 Found net devices under 0000:31:00.0: cvl_0_0 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:58.154 Found net devices under 0000:31:00.1: cvl_0_1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.154 13:32:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.154 13:32:39 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.154 13:32:39 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.154 13:32:39 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:35:58.155 00:35:58.155 --- 10.0.0.2 ping statistics --- 00:35:58.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.155 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:35:58.155 00:35:58.155 --- 10.0.0.1 ping statistics --- 00:35:58.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.155 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:58.155 13:32:39 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:00.703 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:00.703 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:00.703 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:00.964 13:32:42 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.224 13:32:42 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:01.224 13:32:42 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:01.224 13:32:42 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:01.224 13:32:42 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2030289 00:36:01.224 13:32:42 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2030289 00:36:01.224 13:32:42 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 2030289 ']' 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:01.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:01.224 13:32:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:01.224 [2024-11-06 13:32:42.951000] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:36:01.224 [2024-11-06 13:32:42.951047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.224 [2024-11-06 13:32:43.044584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.224 [2024-11-06 13:32:43.079178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.224 [2024-11-06 13:32:43.079215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.224 [2024-11-06 13:32:43.079223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.224 [2024-11-06 13:32:43.079230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.224 [2024-11-06 13:32:43.079236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.224 [2024-11-06 13:32:43.079823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:02.168 13:32:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 13:32:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.168 13:32:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:02.168 13:32:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 [2024-11-06 13:32:43.796020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.168 13:32:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 ************************************ 00:36:02.168 START TEST fio_dif_1_default 00:36:02.168 ************************************ 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 bdev_null0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:02.168 [2024-11-06 13:32:43.884431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.168 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.168 { 00:36:02.168 "params": { 00:36:02.168 "name": "Nvme$subsystem", 00:36:02.168 "trtype": "$TEST_TRANSPORT", 00:36:02.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.168 "adrfam": "ipv4", 00:36:02.168 "trsvcid": "$NVMF_PORT", 00:36:02.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.168 "hdgst": ${hdgst:-false}, 00:36:02.168 
"ddgst": ${ddgst:-false} 00:36:02.168 }, 00:36:02.168 "method": "bdev_nvme_attach_controller" 00:36:02.168 } 00:36:02.168 EOF 00:36:02.168 )") 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.169 "params": { 00:36:02.169 "name": "Nvme0", 00:36:02.169 "trtype": "tcp", 00:36:02.169 "traddr": "10.0.0.2", 00:36:02.169 "adrfam": "ipv4", 00:36:02.169 "trsvcid": "4420", 00:36:02.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.169 "hdgst": false, 00:36:02.169 "ddgst": false 00:36:02.169 }, 00:36:02.169 "method": "bdev_nvme_attach_controller" 00:36:02.169 }' 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:02.169 13:32:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:02.430 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:02.430 fio-3.35 00:36:02.430 Starting 1 thread 00:36:14.808 00:36:14.808 filename0: (groupid=0, jobs=1): err= 0: pid=2030823: Wed Nov 6 13:32:55 2024 00:36:14.808 read: IOPS=407, BW=1630KiB/s (1670kB/s)(16.0MiB/10029msec) 00:36:14.808 slat (nsec): min=5432, max=88892, avg=7228.72, stdev=1919.15 00:36:14.808 clat (usec): min=394, max=44043, avg=9793.65, stdev=16834.47 00:36:14.808 lat (usec): min=399, max=44087, avg=9800.88, stdev=16833.87 00:36:14.808 clat percentiles (usec): 00:36:14.808 | 1.00th=[ 502], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 758], 00:36:14.808 | 30.00th=[ 791], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 848], 00:36:14.808 | 70.00th=[ 881], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:14.808 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[43779], 00:36:14.808 | 99.99th=[44303] 00:36:14.808 bw ( KiB/s): min= 704, max= 9984, per=100.00%, avg=1633.60, stdev=2204.14, samples=20 00:36:14.808 iops : min= 176, max= 2496, avg=408.40, stdev=551.03, samples=20 00:36:14.808 lat (usec) : 500=0.93%, 750=17.78%, 1000=56.43% 00:36:14.808 lat (msec) : 2=2.64%, 50=22.21% 00:36:14.808 cpu : usr=93.40%, sys=6.34%, ctx=23, majf=0, minf=209 00:36:14.808 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:14.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:14.808 issued rwts: total=4088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:14.808 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:36:14.808 00:36:14.808 Run status group 0 (all jobs): 00:36:14.808 READ: bw=1630KiB/s (1670kB/s), 1630KiB/s-1630KiB/s (1670kB/s-1670kB/s), io=16.0MiB (16.7MB), run=10029-10029msec 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 00:36:14.808 real 0m11.338s 00:36:14.808 user 0m19.612s 00:36:14.808 sys 0m1.022s 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 ************************************ 00:36:14.808 END TEST fio_dif_1_default 00:36:14.808 ************************************ 00:36:14.808 13:32:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:14.808 13:32:55 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:14.808 13:32:55 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 ************************************ 00:36:14.808 START TEST fio_dif_1_multi_subsystems 00:36:14.808 ************************************ 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 bdev_null0 00:36:14.808 13:32:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 [2024-11-06 13:32:55.301368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.808 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 bdev_null1 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.809 { 00:36:14.809 "params": { 00:36:14.809 "name": "Nvme$subsystem", 00:36:14.809 "trtype": "$TEST_TRANSPORT", 00:36:14.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.809 "adrfam": "ipv4", 00:36:14.809 "trsvcid": "$NVMF_PORT", 00:36:14.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.809 "hdgst": ${hdgst:-false}, 00:36:14.809 "ddgst": ${ddgst:-false} 00:36:14.809 }, 00:36:14.809 "method": "bdev_nvme_attach_controller" 00:36:14.809 } 00:36:14.809 EOF 00:36:14.809 )") 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.809 
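For the two-thread run below, the target side is just the single-subsystem recipe applied twice. Condensed from the RPCs traced above (rpc_cmd wraps scripts/rpc.py; serials and NQNs as in this run):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ...and the same four calls again for bdev_null1 / cnode1 (serial 53313233-1),
  # giving fio one namespace per thread.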
13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:14.809 { 00:36:14.809 "params": { 00:36:14.809 "name": "Nvme$subsystem", 00:36:14.809 "trtype": "$TEST_TRANSPORT", 00:36:14.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:14.809 "adrfam": "ipv4", 00:36:14.809 "trsvcid": "$NVMF_PORT", 00:36:14.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:14.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:14.809 "hdgst": ${hdgst:-false}, 00:36:14.809 "ddgst": ${ddgst:-false} 00:36:14.809 }, 00:36:14.809 "method": "bdev_nvme_attach_controller" 00:36:14.809 } 00:36:14.809 EOF 00:36:14.809 )") 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:14.809 "params": { 00:36:14.809 "name": "Nvme0", 00:36:14.809 "trtype": "tcp", 00:36:14.809 "traddr": "10.0.0.2", 00:36:14.809 "adrfam": "ipv4", 00:36:14.809 "trsvcid": "4420", 00:36:14.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.809 "hdgst": false, 00:36:14.809 "ddgst": false 00:36:14.809 }, 00:36:14.809 "method": "bdev_nvme_attach_controller" 00:36:14.809 },{ 00:36:14.809 "params": { 00:36:14.809 "name": "Nvme1", 00:36:14.809 "trtype": "tcp", 00:36:14.809 "traddr": "10.0.0.2", 00:36:14.809 "adrfam": "ipv4", 00:36:14.809 "trsvcid": "4420", 00:36:14.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:14.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:14.809 "hdgst": false, 00:36:14.809 "ddgst": false 00:36:14.809 }, 00:36:14.809 "method": "bdev_nvme_attach_controller" 00:36:14.809 }' 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:14.809 13:32:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:14.809 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:14.809 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:14.809 fio-3.35 00:36:14.809 Starting 2 threads 00:36:24.818 00:36:24.818 filename0: (groupid=0, jobs=1): err= 0: pid=2033169: Wed Nov 6 13:33:06 2024 00:36:24.818 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:36:24.818 slat (nsec): min=5473, max=37762, avg=7058.36, stdev=2259.80 00:36:24.818 clat (usec): min=664, max=43663, avg=21079.66, stdev=20157.70 00:36:24.818 lat (usec): min=674, max=43695, avg=21086.72, stdev=20157.54 00:36:24.818 clat percentiles (usec): 00:36:24.818 | 1.00th=[ 734], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 832], 00:36:24.818 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:36:24.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:24.818 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:36:24.818 | 99.99th=[43779] 00:36:24.818 bw ( KiB/s): min= 672, max= 768, per=66.23%, avg=759.58, stdev=25.78, samples=19 00:36:24.818 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:36:24.818 lat (usec) : 750=1.74%, 1000=46.94% 00:36:24.818 lat (msec) : 2=1.11%, 50=50.21% 00:36:24.818 cpu : usr=95.24%, sys=4.52%, ctx=29, majf=0, minf=149 00:36:24.818 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.818 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.818 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:24.818 filename1: (groupid=0, jobs=1): err= 0: pid=2033170: Wed Nov 6 13:33:06 2024 00:36:24.818 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10025msec) 00:36:24.818 slat (nsec): min=5435, max=38061, avg=6680.64, stdev=2659.91 00:36:24.818 clat (usec): min=40806, max=42728, avg=41067.03, stdev=288.07 00:36:24.818 lat (usec): min=40820, max=42766, avg=41073.71, stdev=289.05 00:36:24.818 clat percentiles (usec): 00:36:24.818 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:24.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:24.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:24.818 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:24.818 | 99.99th=[42730] 00:36:24.818 bw ( KiB/s): min= 384, max= 416, per=33.86%, avg=388.80, stdev=11.72, samples=20 00:36:24.818 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:24.818 lat (msec) : 50=100.00% 00:36:24.818 cpu : usr=95.47%, sys=4.33%, ctx=14, majf=0, minf=156 00:36:24.818 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:24.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.818 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:24.818 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:24.818 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:24.818 00:36:24.818 Run status group 0 (all jobs): 00:36:24.818 READ: bw=1146KiB/s (1173kB/s), 389KiB/s-758KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10001-10025msec 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 00:36:25.079 real 0m11.605s 00:36:25.079 user 0m36.586s 00:36:25.079 sys 0m1.257s 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 ************************************ 00:36:25.079 END TEST fio_dif_1_multi_subsystems 00:36:25.079 ************************************ 00:36:25.079 13:33:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:36:25.079 13:33:06 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:25.079 13:33:06 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 ************************************ 00:36:25.079 START TEST fio_dif_rand_params 00:36:25.079 ************************************ 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 bdev_null0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.079 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.341 [2024-11-06 13:33:06.990158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.341 13:33:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.341 { 00:36:25.341 "params": { 00:36:25.341 "name": "Nvme$subsystem", 00:36:25.341 "trtype": "$TEST_TRANSPORT", 00:36:25.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.341 "adrfam": "ipv4", 00:36:25.341 "trsvcid": "$NVMF_PORT", 00:36:25.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.341 "hdgst": ${hdgst:-false}, 00:36:25.341 "ddgst": ${ddgst:-false} 00:36:25.341 }, 00:36:25.341 "method": "bdev_nvme_attach_controller" 00:36:25.341 } 00:36:25.341 EOF 00:36:25.341 )") 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.341 13:33:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
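The gen_nvmf_target_json trace above builds one heredoc fragment per subsystem; the `jq .` step just traced and the `IFS=,` / `printf '%s\n'` join that follows collapse those fragments into the single JSON document handed to fio. A minimal sketch of that pattern, assuming the transport/address/port values used in this run (tcp, 10.0.0.2:4420) — the real helper in nvmf/common.sh takes these from the test environment:

config=()
for subsystem in 0; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas; with more than one subsystem this yields the
# {...},{...} block printed in the trace. The real helper embeds the joined
# block in a fuller bdev config and pretty-prints it with jq before handing it
# to fio on a file descriptor.
(IFS=","; printf '%s\n' "${config[*]}")
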
00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:25.341 "params": { 00:36:25.341 "name": "Nvme0", 00:36:25.341 "trtype": "tcp", 00:36:25.341 "traddr": "10.0.0.2", 00:36:25.341 "adrfam": "ipv4", 00:36:25.341 "trsvcid": "4420", 00:36:25.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.341 "hdgst": false, 00:36:25.341 "ddgst": false 00:36:25.341 }, 00:36:25.341 "method": "bdev_nvme_attach_controller" 00:36:25.341 }' 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:25.341 13:33:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.602 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:25.602 ... 
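Before the launch, the harness probes the fio plugin's sanitizer dependencies (the ldd | grep libasan / grep libclang_rt.asan | awk '{print $3}' steps above); both probes resolve empty here, so LD_PRELOAD ends up carrying only the SPDK bdev plugin itself. fio then receives the JSON config on /dev/fd/62 and the generated job file on /dev/fd/61. An equivalent stand-alone launch, as a sketch only: nvmf.json stands for the gen_nvmf_target_json output above, the job file is reconstructed from the fio banner and the bs=128k / numjobs=3 / iodepth=3 / runtime=5 parameters set at the top of this test, and the bdev name Nvme0n1 is an assumption, not taken from the log.

# job.fio: plausible reconstruction of the harness-generated job file.
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
; "Nvme0n1" is an assumed bdev name for the attached controller's namespace
filename=Nvme0n1
EOF

# Launch with the plugin preloaded, as the trace above does via fd 62/61.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvmf.json job.fio
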
00:36:25.602 fio-3.35 00:36:25.602 Starting 3 threads 00:36:32.188 00:36:32.188 filename0: (groupid=0, jobs=1): err= 0: pid=2035639: Wed Nov 6 13:33:13 2024 00:36:32.188 read: IOPS=376, BW=47.0MiB/s (49.3MB/s)(237MiB/5045msec) 00:36:32.188 slat (nsec): min=5481, max=31773, avg=7994.60, stdev=1615.18 00:36:32.188 clat (usec): min=3657, max=86855, avg=7946.61, stdev=5533.72 00:36:32.188 lat (usec): min=3665, max=86864, avg=7954.60, stdev=5533.91 00:36:32.188 clat percentiles (usec): 00:36:32.188 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6390], 00:36:32.188 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7570], 00:36:32.188 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8979], 00:36:32.188 | 99.00th=[45876], 99.50th=[46924], 99.90th=[50070], 99.95th=[86508], 00:36:32.188 | 99.99th=[86508] 00:36:32.188 bw ( KiB/s): min=30464, max=56320, per=39.36%, avg=48486.40, stdev=8021.51, samples=10 00:36:32.188 iops : min= 238, max= 440, avg=378.80, stdev=62.67, samples=10 00:36:32.188 lat (msec) : 4=0.47%, 10=97.42%, 20=0.32%, 50=1.69%, 100=0.11% 00:36:32.188 cpu : usr=94.05%, sys=5.69%, ctx=32, majf=0, minf=81 00:36:32.188 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 issued rwts: total=1897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.188 filename0: (groupid=0, jobs=1): err= 0: pid=2035640: Wed Nov 6 13:33:13 2024 00:36:32.188 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(176MiB/5045msec) 00:36:32.188 slat (nsec): min=5518, max=32126, avg=8168.69, stdev=2047.85 00:36:32.188 clat (usec): min=4342, max=90223, avg=10732.05, stdev=8197.17 00:36:32.188 lat (usec): min=4351, max=90229, avg=10740.22, stdev=8197.12 00:36:32.188 clat percentiles (usec): 00:36:32.188 | 1.00th=[ 5473], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 8225], 00:36:32.188 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:36:32.188 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:36:32.188 | 99.00th=[49021], 99.50th=[49546], 99.90th=[89654], 99.95th=[90702], 00:36:32.188 | 99.99th=[90702] 00:36:32.188 bw ( KiB/s): min=21760, max=41472, per=29.14%, avg=35891.20, stdev=7192.14, samples=10 00:36:32.188 iops : min= 170, max= 324, avg=280.40, stdev=56.19, samples=10 00:36:32.188 lat (msec) : 10=62.92%, 20=33.95%, 50=2.63%, 100=0.50% 00:36:32.188 cpu : usr=94.85%, sys=4.90%, ctx=6, majf=0, minf=71 00:36:32.188 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 issued rwts: total=1405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.188 filename0: (groupid=0, jobs=1): err= 0: pid=2035641: Wed Nov 6 13:33:13 2024 00:36:32.188 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(194MiB/5045msec) 00:36:32.188 slat (nsec): min=5568, max=33305, avg=8110.95, stdev=1786.56 00:36:32.188 clat (usec): min=4036, max=87510, avg=9707.22, stdev=5606.96 00:36:32.188 lat (usec): min=4045, max=87516, avg=9715.34, stdev=5607.00 00:36:32.188 clat percentiles (usec): 00:36:32.188 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 6980], 20.00th=[ 
7767], 00:36:32.188 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9634], 00:36:32.188 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11338], 00:36:32.188 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49546], 99.95th=[87557], 00:36:32.188 | 99.99th=[87557] 00:36:32.188 bw ( KiB/s): min=27904, max=44032, per=32.23%, avg=39705.60, stdev=4375.28, samples=10 00:36:32.188 iops : min= 218, max= 344, avg=310.20, stdev=34.18, samples=10 00:36:32.188 lat (msec) : 10=71.86%, 20=26.34%, 50=1.74%, 100=0.06% 00:36:32.188 cpu : usr=94.67%, sys=5.08%, ctx=7, majf=0, minf=113 00:36:32.188 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.188 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.188 00:36:32.188 Run status group 0 (all jobs): 00:36:32.188 READ: bw=120MiB/s (126MB/s), 34.8MiB/s-47.0MiB/s (36.5MB/s-49.3MB/s), io=607MiB (636MB), run=5045-5045msec 00:36:32.188 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:32.188 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 bdev_null0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 [2024-11-06 13:33:13.271180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 bdev_null1 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 bdev_null2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.189 { 00:36:32.189 "params": { 00:36:32.189 "name": "Nvme$subsystem", 00:36:32.189 "trtype": "$TEST_TRANSPORT", 00:36:32.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.189 "adrfam": "ipv4", 00:36:32.189 "trsvcid": "$NVMF_PORT", 00:36:32.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.189 "hdgst": ${hdgst:-false}, 00:36:32.189 "ddgst": ${ddgst:-false} 00:36:32.189 }, 00:36:32.189 "method": "bdev_nvme_attach_controller" 00:36:32.189 } 00:36:32.189 EOF 00:36:32.189 )") 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.189 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.189 { 00:36:32.189 "params": { 00:36:32.189 "name": "Nvme$subsystem", 00:36:32.189 "trtype": "$TEST_TRANSPORT", 00:36:32.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.189 "adrfam": "ipv4", 00:36:32.189 "trsvcid": "$NVMF_PORT", 00:36:32.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.190 "hdgst": ${hdgst:-false}, 00:36:32.190 "ddgst": ${ddgst:-false} 00:36:32.190 }, 00:36:32.190 "method": "bdev_nvme_attach_controller" 00:36:32.190 } 00:36:32.190 EOF 00:36:32.190 )") 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:32.190 { 00:36:32.190 "params": { 00:36:32.190 "name": "Nvme$subsystem", 00:36:32.190 "trtype": "$TEST_TRANSPORT", 00:36:32.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:32.190 "adrfam": "ipv4", 00:36:32.190 "trsvcid": "$NVMF_PORT", 00:36:32.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:32.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:32.190 "hdgst": ${hdgst:-false}, 00:36:32.190 "ddgst": ${ddgst:-false} 00:36:32.190 }, 00:36:32.190 "method": "bdev_nvme_attach_controller" 00:36:32.190 } 00:36:32.190 EOF 00:36:32.190 )") 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:32.190 "params": { 00:36:32.190 "name": "Nvme0", 00:36:32.190 "trtype": "tcp", 00:36:32.190 "traddr": "10.0.0.2", 00:36:32.190 "adrfam": "ipv4", 00:36:32.190 "trsvcid": "4420", 00:36:32.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:32.190 "hdgst": false, 00:36:32.190 "ddgst": false 00:36:32.190 }, 00:36:32.190 "method": "bdev_nvme_attach_controller" 00:36:32.190 },{ 00:36:32.190 "params": { 00:36:32.190 "name": "Nvme1", 00:36:32.190 "trtype": "tcp", 00:36:32.190 "traddr": "10.0.0.2", 00:36:32.190 "adrfam": "ipv4", 00:36:32.190 "trsvcid": "4420", 00:36:32.190 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:32.190 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:32.190 "hdgst": false, 00:36:32.190 "ddgst": false 00:36:32.190 }, 00:36:32.190 "method": "bdev_nvme_attach_controller" 00:36:32.190 },{ 00:36:32.190 "params": { 00:36:32.190 "name": "Nvme2", 00:36:32.190 "trtype": "tcp", 00:36:32.190 "traddr": "10.0.0.2", 00:36:32.190 "adrfam": "ipv4", 00:36:32.190 "trsvcid": "4420", 00:36:32.190 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:32.190 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:32.190 "hdgst": false, 00:36:32.190 "ddgst": false 00:36:32.190 }, 00:36:32.190 "method": "bdev_nvme_attach_controller" 00:36:32.190 }' 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:32.190 13:33:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:32.190 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.190 ... 00:36:32.190 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.190 ... 00:36:32.190 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:32.190 ... 00:36:32.190 fio-3.35 00:36:32.190 Starting 24 threads 00:36:44.422 00:36:44.422 filename0: (groupid=0, jobs=1): err= 0: pid=2037488: Wed Nov 6 13:33:24 2024 00:36:44.422 read: IOPS=977, BW=3908KiB/s (4002kB/s)(38.2MiB/10020msec) 00:36:44.423 slat (nsec): min=5607, max=71643, avg=6505.43, stdev=2384.44 00:36:44.423 clat (usec): min=1498, max=32905, avg=16334.39, stdev=2786.13 00:36:44.423 lat (usec): min=1522, max=32912, avg=16340.89, stdev=2784.98 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[ 2966], 5.00th=[13960], 10.00th=[14222], 20.00th=[14484], 00:36:44.423 | 30.00th=[15008], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:36:44.423 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[19792], 00:36:44.423 | 99.00th=[24249], 99.50th=[24249], 99.90th=[24773], 99.95th=[31065], 00:36:44.423 | 99.99th=[32900] 00:36:44.423 bw ( KiB/s): min= 3472, max= 4080, per=6.06%, avg=3910.40, stdev=116.76, samples=20 00:36:44.423 iops : min= 868, max= 1020, avg=977.60, stdev=29.19, samples=20 00:36:44.423 lat (msec) : 2=0.20%, 4=1.18%, 10=0.61%, 20=93.34%, 50=4.66% 00:36:44.423 cpu : usr=98.94%, sys=0.80%, ctx=13, majf=0, minf=164 00:36:44.423 IO depths : 1=0.2%, 2=0.5%, 4=7.2%, 8=79.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=89.1%, 8=5.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=9790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037489: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=661, BW=2646KiB/s (2710kB/s)(25.9MiB/10013msec) 00:36:44.423 slat (nsec): min=5640, max=99984, avg=9838.93, stdev=7863.40 00:36:44.423 clat (usec): min=5330, max=25812, avg=24102.48, stdev=1644.32 00:36:44.423 lat (usec): min=5352, max=25819, avg=24112.32, stdev=1641.88 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[14615], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.423 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.423 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:36:44.423 | 99.99th=[25822] 00:36:44.423 bw ( KiB/s): min= 2554, max= 2944, per=4.10%, avg=2645.68, stdev=96.00, samples=19 00:36:44.423 iops : min= 638, max= 736, avg=661.26, stdev=24.01, samples=19 00:36:44.423 lat (msec) : 10=0.51%, 20=0.94%, 50=98.55% 00:36:44.423 cpu : usr=98.84%, sys=0.86%, 
ctx=48, majf=0, minf=76 00:36:44.423 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037490: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=645, BW=2583KiB/s (2645kB/s)(25.2MiB/10010msec) 00:36:44.423 slat (nsec): min=5633, max=83068, avg=17732.68, stdev=12819.29 00:36:44.423 clat (usec): min=9968, max=43698, avg=24630.75, stdev=3837.56 00:36:44.423 lat (usec): min=9975, max=43716, avg=24648.48, stdev=3839.48 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[13829], 5.00th=[18744], 10.00th=[20579], 20.00th=[23987], 00:36:44.423 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24511], 80.00th=[24773], 90.00th=[29492], 95.00th=[33162], 00:36:44.423 | 99.00th=[36439], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:36:44.423 | 99.99th=[43779] 00:36:44.423 bw ( KiB/s): min= 2128, max= 2746, per=4.00%, avg=2580.68, stdev=175.73, samples=19 00:36:44.423 iops : min= 532, max= 686, avg=645.05, stdev=43.94, samples=19 00:36:44.423 lat (msec) : 10=0.06%, 20=8.45%, 50=91.49% 00:36:44.423 cpu : usr=98.93%, sys=0.80%, ctx=15, majf=0, minf=55 00:36:44.423 IO depths : 1=2.8%, 2=6.5%, 4=15.3%, 8=64.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037491: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=662, BW=2649KiB/s (2712kB/s)(25.9MiB/10009msec) 00:36:44.423 slat (nsec): min=5629, max=52855, avg=10321.53, stdev=7092.66 00:36:44.423 clat (usec): min=10260, max=39942, avg=24086.21, stdev=2685.96 00:36:44.423 lat (usec): min=10266, max=39980, avg=24096.53, stdev=2686.76 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[14746], 5.00th=[19006], 10.00th=[23200], 20.00th=[23987], 00:36:44.423 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822], 00:36:44.423 | 99.00th=[34341], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:36:44.423 | 99.99th=[40109] 00:36:44.423 bw ( KiB/s): min= 2554, max= 2832, per=4.10%, avg=2648.26, stdev=62.69, samples=19 00:36:44.423 iops : min= 638, max= 708, avg=661.95, stdev=15.74, samples=19 00:36:44.423 lat (msec) : 20=6.10%, 50=93.90% 00:36:44.423 cpu : usr=98.93%, sys=0.80%, ctx=39, majf=0, minf=69 00:36:44.423 IO depths : 1=0.9%, 2=5.2%, 4=18.7%, 8=62.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037492: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10006msec) 
00:36:44.423 slat (nsec): min=5702, max=60519, avg=11452.41, stdev=7627.76 00:36:44.423 clat (usec): min=10532, max=30243, avg=24254.08, stdev=755.10 00:36:44.423 lat (usec): min=10544, max=30250, avg=24265.53, stdev=753.93 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987], 00:36:44.423 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.423 | 99.00th=[25560], 99.50th=[25822], 99.90th=[28967], 99.95th=[30016], 00:36:44.423 | 99.99th=[30278] 00:36:44.423 bw ( KiB/s): min= 2560, max= 2688, per=4.07%, avg=2626.37, stdev=64.19, samples=19 00:36:44.423 iops : min= 640, max= 672, avg=656.47, stdev=15.97, samples=19 00:36:44.423 lat (msec) : 20=0.33%, 50=99.67% 00:36:44.423 cpu : usr=99.07%, sys=0.67%, ctx=13, majf=0, minf=68 00:36:44.423 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037493: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10007msec) 00:36:44.423 slat (nsec): min=5640, max=71207, avg=20132.78, stdev=11912.39 00:36:44.423 clat (usec): min=11939, max=40978, avg=24179.53, stdev=909.36 00:36:44.423 lat (usec): min=11946, max=41007, avg=24199.66, stdev=909.30 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.423 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[25035], 00:36:44.423 | 99.00th=[25297], 99.50th=[25560], 99.90th=[32375], 99.95th=[32375], 00:36:44.423 | 99.99th=[41157] 00:36:44.423 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2626.11, stdev=65.78, samples=19 00:36:44.423 iops : min= 638, max= 672, avg=656.42, stdev=16.46, samples=19 00:36:44.423 lat (msec) : 20=0.52%, 50=99.48% 00:36:44.423 cpu : usr=98.51%, sys=1.04%, ctx=79, majf=0, minf=44 00:36:44.423 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037495: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10003msec) 00:36:44.423 slat (nsec): min=5621, max=82122, avg=23063.65, stdev=13389.93 00:36:44.423 clat (usec): min=4986, max=37737, avg=24115.79, stdev=1293.73 00:36:44.423 lat (usec): min=4992, max=37769, avg=24138.85, stdev=1294.18 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.423 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:36:44.423 | 99.00th=[25297], 99.50th=[25560], 99.90th=[37487], 99.95th=[37487], 00:36:44.423 | 
99.99th=[37487] 00:36:44.423 bw ( KiB/s): min= 2554, max= 2688, per=4.06%, avg=2618.74, stdev=64.97, samples=19 00:36:44.423 iops : min= 638, max= 672, avg=654.53, stdev=16.19, samples=19 00:36:44.423 lat (msec) : 10=0.24%, 20=0.24%, 50=99.51% 00:36:44.423 cpu : usr=98.43%, sys=1.05%, ctx=132, majf=0, minf=46 00:36:44.423 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.423 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.423 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.423 filename0: (groupid=0, jobs=1): err= 0: pid=2037496: Wed Nov 6 13:33:24 2024 00:36:44.423 read: IOPS=688, BW=2753KiB/s (2819kB/s)(26.9MiB/10003msec) 00:36:44.423 slat (nsec): min=5629, max=64447, avg=9148.17, stdev=5573.41 00:36:44.423 clat (usec): min=5620, max=35022, avg=23171.71, stdev=3172.65 00:36:44.423 lat (usec): min=5647, max=35040, avg=23180.86, stdev=3172.88 00:36:44.423 clat percentiles (usec): 00:36:44.423 | 1.00th=[11338], 5.00th=[15795], 10.00th=[17433], 20.00th=[23725], 00:36:44.423 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.423 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:36:44.423 | 99.00th=[27919], 99.50th=[32113], 99.90th=[34341], 99.95th=[34866], 00:36:44.423 | 99.99th=[34866] 00:36:44.423 bw ( KiB/s): min= 2554, max= 3720, per=4.27%, avg=2755.37, stdev=307.22, samples=19 00:36:44.423 iops : min= 638, max= 930, avg=688.68, stdev=76.83, samples=19 00:36:44.423 lat (msec) : 10=0.49%, 20=13.02%, 50=86.49% 00:36:44.424 cpu : usr=98.44%, sys=1.03%, ctx=139, majf=0, minf=58 00:36:44.424 IO depths : 1=5.1%, 2=10.4%, 4=22.1%, 8=54.9%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037497: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=656, BW=2628KiB/s (2691kB/s)(25.7MiB/10011msec) 00:36:44.424 slat (nsec): min=5628, max=62752, avg=12051.66, stdev=7825.54 00:36:44.424 clat (usec): min=12402, max=40313, avg=24253.95, stdev=1168.91 00:36:44.424 lat (usec): min=12409, max=40347, avg=24266.01, stdev=1168.70 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.424 | 99.00th=[25560], 99.50th=[25822], 99.90th=[40109], 99.95th=[40109], 00:36:44.424 | 99.99th=[40109] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2688, per=4.06%, avg=2619.63, stdev=64.74, samples=19 00:36:44.424 iops : min= 638, max= 672, avg=654.79, stdev=16.15, samples=19 00:36:44.424 lat (msec) : 20=0.52%, 50=99.48% 00:36:44.424 cpu : usr=98.77%, sys=0.78%, ctx=78, majf=0, minf=46 00:36:44.424 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued 
rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037498: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=660, BW=2643KiB/s (2706kB/s)(25.8MiB/10002msec) 00:36:44.424 slat (nsec): min=5579, max=60453, avg=11450.89, stdev=6443.46 00:36:44.424 clat (usec): min=1849, max=46571, avg=24110.83, stdev=2275.70 00:36:44.424 lat (usec): min=1855, max=46595, avg=24122.28, stdev=2276.05 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[15533], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.424 | 99.00th=[25560], 99.50th=[25560], 99.90th=[46400], 99.95th=[46400], 00:36:44.424 | 99.99th=[46400] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2688, per=4.06%, avg=2619.63, stdev=66.05, samples=19 00:36:44.424 iops : min= 638, max= 672, avg=654.79, stdev=16.58, samples=19 00:36:44.424 lat (msec) : 2=0.03%, 4=0.45%, 10=0.24%, 20=0.48%, 50=98.79% 00:36:44.424 cpu : usr=97.42%, sys=1.62%, ctx=1050, majf=0, minf=53 00:36:44.424 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037499: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=658, BW=2634KiB/s (2698kB/s)(25.8MiB/10009msec) 00:36:44.424 slat (nsec): min=5631, max=80037, avg=14709.82, stdev=11497.29 00:36:44.424 clat (usec): min=10253, max=29449, avg=24168.82, stdev=1004.70 00:36:44.424 lat (usec): min=10263, max=29461, avg=24183.53, stdev=1003.19 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[20579], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.424 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[27919], 00:36:44.424 | 99.99th=[29492] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2816, per=4.08%, avg=2632.21, stdev=76.90, samples=19 00:36:44.424 iops : min= 638, max= 704, avg=657.89, stdev=19.17, samples=19 00:36:44.424 lat (msec) : 20=0.76%, 50=99.24% 00:36:44.424 cpu : usr=98.73%, sys=0.95%, ctx=52, majf=0, minf=40 00:36:44.424 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037500: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.7MiB/10005msec) 00:36:44.424 slat (nsec): min=5726, max=99602, avg=23710.22, stdev=13712.82 00:36:44.424 clat (usec): min=5564, max=49860, avg=24095.89, stdev=1561.96 00:36:44.424 lat (usec): min=5570, max=49888, avg=24119.60, stdev=1562.47 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 
1.00th=[17171], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.424 | 99.00th=[26084], 99.50th=[28705], 99.90th=[35390], 99.95th=[35390], 00:36:44.424 | 99.99th=[50070] 00:36:44.424 bw ( KiB/s): min= 2544, max= 2784, per=4.07%, avg=2624.37, stdev=75.17, samples=19 00:36:44.424 iops : min= 636, max= 696, avg=655.95, stdev=18.89, samples=19 00:36:44.424 lat (msec) : 10=0.24%, 20=1.24%, 50=98.51% 00:36:44.424 cpu : usr=98.50%, sys=1.03%, ctx=125, majf=0, minf=41 00:36:44.424 IO depths : 1=5.0%, 2=11.0%, 4=24.2%, 8=52.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037501: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.8MiB/10012msec) 00:36:44.424 slat (nsec): min=5684, max=99678, avg=21927.01, stdev=12349.73 00:36:44.424 clat (usec): min=8915, max=30477, avg=24115.50, stdev=992.33 00:36:44.424 lat (usec): min=8924, max=30490, avg=24137.42, stdev=991.85 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[20579], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.424 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25822], 99.95th=[30278], 00:36:44.424 | 99.99th=[30540] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2816, per=4.08%, avg=2633.16, stdev=77.58, samples=19 00:36:44.424 iops : min= 638, max= 704, avg=658.21, stdev=19.39, samples=19 00:36:44.424 lat (msec) : 10=0.05%, 20=0.86%, 50=99.09% 00:36:44.424 cpu : usr=98.03%, sys=1.28%, ctx=207, majf=0, minf=65 00:36:44.424 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037502: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10002msec) 00:36:44.424 slat (nsec): min=5666, max=78556, avg=21481.95, stdev=12403.90 00:36:44.424 clat (usec): min=6683, max=43984, avg=24135.69, stdev=1259.56 00:36:44.424 lat (usec): min=6689, max=44015, avg=24157.18, stdev=1259.78 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.424 | 99.00th=[25297], 99.50th=[25560], 99.90th=[35390], 99.95th=[35390], 00:36:44.424 | 99.99th=[43779] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2688, per=4.06%, avg=2619.32, stdev=64.41, samples=19 00:36:44.424 iops : min= 638, max= 672, avg=654.68, stdev=16.03, samples=19 00:36:44.424 lat (msec) : 10=0.24%, 20=0.27%, 50=99.48% 00:36:44.424 
cpu : usr=99.02%, sys=0.70%, ctx=25, majf=0, minf=49 00:36:44.424 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037503: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.7MiB/10014msec) 00:36:44.424 slat (nsec): min=5665, max=76071, avg=21920.13, stdev=12168.41 00:36:44.424 clat (usec): min=16561, max=29651, avg=24148.49, stdev=648.50 00:36:44.424 lat (usec): min=16586, max=29685, avg=24170.41, stdev=648.36 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.424 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.424 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.424 | 99.00th=[25297], 99.50th=[25822], 99.90th=[29492], 99.95th=[29492], 00:36:44.424 | 99.99th=[29754] 00:36:44.424 bw ( KiB/s): min= 2554, max= 2693, per=4.07%, avg=2626.37, stdev=66.05, samples=19 00:36:44.424 iops : min= 638, max= 673, avg=656.47, stdev=16.51, samples=19 00:36:44.424 lat (msec) : 20=0.46%, 50=99.54% 00:36:44.424 cpu : usr=98.27%, sys=1.18%, ctx=176, majf=0, minf=58 00:36:44.424 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:44.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.424 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.424 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.424 filename1: (groupid=0, jobs=1): err= 0: pid=2037505: Wed Nov 6 13:33:24 2024 00:36:44.424 read: IOPS=660, BW=2642KiB/s (2706kB/s)(25.8MiB/10004msec) 00:36:44.424 slat (usec): min=5, max=116, avg=19.11, stdev=12.09 00:36:44.424 clat (usec): min=5569, max=25988, avg=24060.61, stdev=1511.20 00:36:44.424 lat (usec): min=5592, max=25996, avg=24079.72, stdev=1509.44 00:36:44.424 clat percentiles (usec): 00:36:44.424 | 1.00th=[19792], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.425 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[26084], 00:36:44.425 | 99.99th=[26084] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2944, per=4.09%, avg=2638.95, stdev=98.65, samples=19 00:36:44.425 iops : min= 638, max= 736, avg=659.58, stdev=24.77, samples=19 00:36:44.425 lat (msec) : 10=0.58%, 20=0.64%, 50=98.79% 00:36:44.425 cpu : usr=98.60%, sys=0.98%, ctx=91, majf=0, minf=71 00:36:44.425 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037506: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=657, BW=2630KiB/s 
(2693kB/s)(25.7MiB/10003msec) 00:36:44.425 slat (usec): min=5, max=434, avg=21.65, stdev=14.44 00:36:44.425 clat (usec): min=16331, max=25972, avg=24137.03, stdev=600.44 00:36:44.425 lat (usec): min=16349, max=25996, avg=24158.68, stdev=600.84 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.425 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25822], 99.95th=[25822], 00:36:44.425 | 99.99th=[26084] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2625.47, stdev=65.83, samples=19 00:36:44.425 iops : min= 638, max= 672, avg=656.21, stdev=16.48, samples=19 00:36:44.425 lat (msec) : 20=0.49%, 50=99.51% 00:36:44.425 cpu : usr=98.96%, sys=0.73%, ctx=52, majf=0, minf=57 00:36:44.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037507: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10015msec) 00:36:44.425 slat (nsec): min=5642, max=97059, avg=11015.96, stdev=9412.43 00:36:44.425 clat (usec): min=1729, max=32151, avg=23869.15, stdev=2745.08 00:36:44.425 lat (usec): min=1747, max=32158, avg=23880.17, stdev=2743.86 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[ 5800], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.425 | 99.00th=[25560], 99.50th=[25822], 99.90th=[32113], 99.95th=[32113], 00:36:44.425 | 99.99th=[32113] 00:36:44.425 bw ( KiB/s): min= 2554, max= 3456, per=4.14%, avg=2672.95, stdev=199.98, samples=19 00:36:44.425 iops : min= 638, max= 864, avg=668.11, stdev=50.02, samples=19 00:36:44.425 lat (msec) : 2=0.09%, 4=0.60%, 10=0.84%, 20=1.38%, 50=97.10% 00:36:44.425 cpu : usr=99.00%, sys=0.73%, ctx=8, majf=0, minf=51 00:36:44.425 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037508: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=659, BW=2636KiB/s (2700kB/s)(25.8MiB/10002msec) 00:36:44.425 slat (nsec): min=5524, max=93773, avg=22500.99, stdev=12246.66 00:36:44.425 clat (usec): min=1818, max=46568, avg=24068.31, stdev=2186.72 00:36:44.425 lat (usec): min=1824, max=46594, avg=24090.81, stdev=2187.69 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[19268], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24511], 95.00th=[24773], 00:36:44.425 | 99.00th=[25560], 99.50th=[29492], 
99.90th=[46400], 99.95th=[46400], 00:36:44.425 | 99.99th=[46400] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2688, per=4.05%, avg=2612.89, stdev=64.55, samples=19 00:36:44.425 iops : min= 638, max= 672, avg=653.11, stdev=16.14, samples=19 00:36:44.425 lat (msec) : 2=0.03%, 4=0.42%, 10=0.27%, 20=0.49%, 50=98.79% 00:36:44.425 cpu : usr=98.78%, sys=0.90%, ctx=52, majf=0, minf=49 00:36:44.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037509: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10002msec) 00:36:44.425 slat (nsec): min=5625, max=75093, avg=17062.44, stdev=12161.54 00:36:44.425 clat (usec): min=16479, max=25885, avg=24195.16, stdev=615.95 00:36:44.425 lat (usec): min=16491, max=25892, avg=24212.23, stdev=613.78 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.425 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:36:44.425 | 99.99th=[25822] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2688, per=4.07%, avg=2625.79, stdev=66.13, samples=19 00:36:44.425 iops : min= 638, max= 672, avg=656.32, stdev=16.58, samples=19 00:36:44.425 lat (msec) : 20=0.49%, 50=99.51% 00:36:44.425 cpu : usr=98.01%, sys=1.27%, ctx=290, majf=0, minf=61 00:36:44.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037510: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.8MiB/10012msec) 00:36:44.425 slat (nsec): min=5648, max=92195, avg=12522.55, stdev=11399.03 00:36:44.425 clat (usec): min=13208, max=28644, avg=24194.58, stdev=937.22 00:36:44.425 lat (usec): min=13229, max=28651, avg=24207.10, stdev=934.99 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[20841], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.425 | 99.00th=[25297], 99.50th=[25560], 99.90th=[28443], 99.95th=[28705], 00:36:44.425 | 99.99th=[28705] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2816, per=4.08%, avg=2633.16, stdev=77.58, samples=19 00:36:44.425 iops : min= 638, max= 704, avg=658.21, stdev=19.39, samples=19 00:36:44.425 lat (msec) : 20=0.73%, 50=99.27% 00:36:44.425 cpu : usr=98.99%, sys=0.75%, ctx=11, majf=0, minf=59 00:36:44.425 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037511: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=662, BW=2648KiB/s (2712kB/s)(25.9MiB/10003msec) 00:36:44.425 slat (usec): min=5, max=100, avg=20.47, stdev=12.40 00:36:44.425 clat (usec): min=2697, max=56868, avg=23978.58, stdev=2373.16 00:36:44.425 lat (usec): min=2703, max=56892, avg=23999.05, stdev=2374.39 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[14615], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:36:44.425 | 99.00th=[25822], 99.50th=[31327], 99.90th=[46924], 99.95th=[46924], 00:36:44.425 | 99.99th=[56886] 00:36:44.425 bw ( KiB/s): min= 2528, max= 2832, per=4.08%, avg=2632.00, stdev=82.00, samples=19 00:36:44.425 iops : min= 632, max= 708, avg=657.89, stdev=20.52, samples=19 00:36:44.425 lat (msec) : 4=0.33%, 10=0.39%, 20=1.90%, 50=97.34%, 100=0.03% 00:36:44.425 cpu : usr=98.53%, sys=0.97%, ctx=129, majf=0, minf=47 00:36:44.425 IO depths : 1=5.7%, 2=11.5%, 4=23.3%, 8=52.5%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.425 issued rwts: total=6622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.425 filename2: (groupid=0, jobs=1): err= 0: pid=2037512: Wed Nov 6 13:33:24 2024 00:36:44.425 read: IOPS=658, BW=2634KiB/s (2698kB/s)(25.8MiB/10012msec) 00:36:44.425 slat (nsec): min=5635, max=90844, avg=17225.46, stdev=11433.31 00:36:44.425 clat (usec): min=10516, max=38903, avg=24152.60, stdev=1526.21 00:36:44.425 lat (usec): min=10529, max=38909, avg=24169.83, stdev=1526.56 00:36:44.425 clat percentiles (usec): 00:36:44.425 | 1.00th=[17957], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.425 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:36:44.425 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.425 | 99.00th=[28705], 99.50th=[31327], 99.90th=[39060], 99.95th=[39060], 00:36:44.425 | 99.99th=[39060] 00:36:44.425 bw ( KiB/s): min= 2554, max= 2714, per=4.07%, avg=2626.63, stdev=66.77, samples=19 00:36:44.425 iops : min= 638, max= 678, avg=656.53, stdev=16.68, samples=19 00:36:44.425 lat (msec) : 20=1.96%, 50=98.04% 00:36:44.425 cpu : usr=98.69%, sys=0.93%, ctx=65, majf=0, minf=66 00:36:44.425 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:44.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.426 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.426 issued rwts: total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.426 filename2: (groupid=0, jobs=1): err= 0: pid=2037513: Wed Nov 6 13:33:24 2024 00:36:44.426 read: IOPS=657, BW=2628KiB/s (2691kB/s)(25.7MiB/10006msec) 00:36:44.426 slat (nsec): min=5648, max=76880, avg=19696.18, stdev=11785.57 00:36:44.426 clat (usec): min=11995, max=40081, avg=24166.61, stdev=1468.36 00:36:44.426 lat (usec): min=12001, max=40118, avg=24186.31, stdev=1468.11 00:36:44.426 
clat percentiles (usec): 00:36:44.426 | 1.00th=[18482], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:36:44.426 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:36:44.426 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:36:44.426 | 99.00th=[28705], 99.50th=[31589], 99.90th=[40109], 99.95th=[40109], 00:36:44.426 | 99.99th=[40109] 00:36:44.426 bw ( KiB/s): min= 2554, max= 2752, per=4.07%, avg=2625.53, stdev=69.02, samples=19 00:36:44.426 iops : min= 638, max= 688, avg=656.26, stdev=17.28, samples=19 00:36:44.426 lat (msec) : 20=1.51%, 50=98.49% 00:36:44.426 cpu : usr=99.05%, sys=0.69%, ctx=27, majf=0, minf=38 00:36:44.426 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:44.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.426 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:44.426 issued rwts: total=6574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:44.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:44.426 00:36:44.426 Run status group 0 (all jobs): 00:36:44.426 READ: bw=63.0MiB/s (66.1MB/s), 2583KiB/s-3908KiB/s (2645kB/s-4002kB/s), io=631MiB (662MB), run=10002-10020msec 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 bdev_null0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
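The rpc_cmd calls traced here build the target side for the next job set: a DIF-protected null bdev exported over NVMe/TCP. A minimal standalone sketch of the same sequence using SPDK's rpc.py, assuming a running nvmf_tgt whose TCP transport has already been created, would be:

    # 64 MB null bdev, 512-byte blocks carrying 16 bytes of metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # subsystem that any host may connect to
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    # attach the bdev as a namespace and listen on NVMe/TCP
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
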
00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 [2024-11-06 13:33:24.986128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 bdev_null1 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:44.426 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:44.426 { 00:36:44.426 "params": { 00:36:44.426 "name": "Nvme$subsystem", 00:36:44.426 "trtype": "$TEST_TRANSPORT", 00:36:44.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.426 "adrfam": "ipv4", 00:36:44.426 "trsvcid": "$NVMF_PORT", 00:36:44.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.426 "hdgst": ${hdgst:-false}, 00:36:44.426 "ddgst": ${ddgst:-false} 00:36:44.426 }, 00:36:44.426 "method": "bdev_nvme_attach_controller" 00:36:44.426 } 00:36:44.426 EOF 00:36:44.426 )") 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:44.427 { 00:36:44.427 "params": { 00:36:44.427 "name": "Nvme$subsystem", 00:36:44.427 "trtype": "$TEST_TRANSPORT", 00:36:44.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:44.427 "adrfam": "ipv4", 00:36:44.427 "trsvcid": "$NVMF_PORT", 00:36:44.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:44.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:44.427 "hdgst": ${hdgst:-false}, 00:36:44.427 "ddgst": ${ddgst:-false} 00:36:44.427 }, 00:36:44.427 "method": "bdev_nvme_attach_controller" 00:36:44.427 } 00:36:44.427 EOF 00:36:44.427 )") 00:36:44.427 
13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:44.427 "params": { 00:36:44.427 "name": "Nvme0", 00:36:44.427 "trtype": "tcp", 00:36:44.427 "traddr": "10.0.0.2", 00:36:44.427 "adrfam": "ipv4", 00:36:44.427 "trsvcid": "4420", 00:36:44.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.427 "hdgst": false, 00:36:44.427 "ddgst": false 00:36:44.427 }, 00:36:44.427 "method": "bdev_nvme_attach_controller" 00:36:44.427 },{ 00:36:44.427 "params": { 00:36:44.427 "name": "Nvme1", 00:36:44.427 "trtype": "tcp", 00:36:44.427 "traddr": "10.0.0.2", 00:36:44.427 "adrfam": "ipv4", 00:36:44.427 "trsvcid": "4420", 00:36:44.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:44.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:44.427 "hdgst": false, 00:36:44.427 "ddgst": false 00:36:44.427 }, 00:36:44.427 "method": "bdev_nvme_attach_controller" 00:36:44.427 }' 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:44.427 13:33:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:44.427 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:44.427 ... 00:36:44.427 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:44.427 ... 
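The JSON printed above is the bdev_nvme_attach_controller parameter list that gen_nvmf_target_json assembles for fio's spdk_bdev engine; once attached, each namespace appears to fio as an ordinary filename (Nvme0n1, Nvme1n1). Run by hand, the equivalent invocation is roughly the sketch below. The job options, the Nvme0n1 name, and the nvme.json wrapper ({"subsystems":[{"subsystem":"bdev","config":[...]}]}) are assumptions based on SPDK's usual fio-plugin conventions, not copied from this log:

    # thread=1 is required by the SPDK fio plugin (no fork-per-job)
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=nvme.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=8k --iodepth=8 --numjobs=2 --runtime=5 --time_based
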
00:36:44.427 fio-3.35 00:36:44.427 Starting 4 threads 00:36:49.721 00:36:49.721 filename0: (groupid=0, jobs=1): err= 0: pid=2039808: Wed Nov 6 13:33:31 2024 00:36:49.721 read: IOPS=2962, BW=23.1MiB/s (24.3MB/s)(116MiB/5002msec) 00:36:49.721 slat (nsec): min=5445, max=70094, avg=6219.68, stdev=2190.30 00:36:49.721 clat (usec): min=1168, max=4632, avg=2682.85, stdev=245.37 00:36:49.721 lat (usec): min=1186, max=4638, avg=2689.07, stdev=245.13 00:36:49.721 clat percentiles (usec): 00:36:49.721 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2540], 00:36:49.721 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:49.721 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2933], 95.00th=[ 2999], 00:36:49.721 | 99.00th=[ 3523], 99.50th=[ 3818], 99.90th=[ 4228], 99.95th=[ 4293], 00:36:49.721 | 99.99th=[ 4621] 00:36:49.721 bw ( KiB/s): min=23376, max=24528, per=25.12%, avg=23733.33, stdev=337.99, samples=9 00:36:49.721 iops : min= 2922, max= 3066, avg=2966.67, stdev=42.25, samples=9 00:36:49.721 lat (msec) : 2=1.23%, 4=98.50%, 10=0.28% 00:36:49.721 cpu : usr=96.24%, sys=3.36%, ctx=71, majf=0, minf=0 00:36:49.721 IO depths : 1=0.1%, 2=0.2%, 4=72.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 issued rwts: total=14819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:49.721 filename0: (groupid=0, jobs=1): err= 0: pid=2039809: Wed Nov 6 13:33:31 2024 00:36:49.721 read: IOPS=3037, BW=23.7MiB/s (24.9MB/s)(119MiB/5001msec) 00:36:49.721 slat (nsec): min=5457, max=72946, avg=8333.61, stdev=2041.22 00:36:49.721 clat (usec): min=1145, max=4815, avg=2612.82, stdev=394.26 00:36:49.721 lat (usec): min=1151, max=4839, avg=2621.16, stdev=394.17 00:36:49.721 clat percentiles (usec): 00:36:49.721 | 1.00th=[ 1860], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2278], 00:36:49.721 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:49.721 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 3163], 95.00th=[ 3458], 00:36:49.721 | 99.00th=[ 3720], 99.50th=[ 3982], 99.90th=[ 4424], 99.95th=[ 4490], 00:36:49.721 | 99.99th=[ 4555] 00:36:49.721 bw ( KiB/s): min=23952, max=24768, per=25.70%, avg=24280.89, stdev=276.79, samples=9 00:36:49.721 iops : min= 2994, max= 3096, avg=3035.11, stdev=34.60, samples=9 00:36:49.721 lat (msec) : 2=3.56%, 4=96.02%, 10=0.41% 00:36:49.721 cpu : usr=94.76%, sys=4.08%, ctx=141, majf=0, minf=9 00:36:49.721 IO depths : 1=0.1%, 2=0.8%, 4=68.0%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 issued rwts: total=15192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:49.721 filename1: (groupid=0, jobs=1): err= 0: pid=2039810: Wed Nov 6 13:33:31 2024 00:36:49.721 read: IOPS=2928, BW=22.9MiB/s (24.0MB/s)(114MiB/5002msec) 00:36:49.721 slat (nsec): min=5466, max=75381, avg=6092.95, stdev=1833.50 00:36:49.721 clat (usec): min=1142, max=6713, avg=2716.04, stdev=214.15 00:36:49.721 lat (usec): min=1160, max=6741, avg=2722.14, stdev=214.06 00:36:49.721 clat percentiles (usec): 00:36:49.721 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2638], 00:36:49.721 | 30.00th=[ 2671], 
40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:49.721 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2966], 00:36:49.721 | 99.00th=[ 3359], 99.50th=[ 3687], 99.90th=[ 4146], 99.95th=[ 5211], 00:36:49.721 | 99.99th=[ 6652] 00:36:49.721 bw ( KiB/s): min=23328, max=23632, per=24.82%, avg=23447.00, stdev=110.77, samples=9 00:36:49.721 iops : min= 2916, max= 2954, avg=2930.78, stdev=13.75, samples=9 00:36:49.721 lat (msec) : 2=0.40%, 4=99.36%, 10=0.25% 00:36:49.721 cpu : usr=95.80%, sys=3.96%, ctx=5, majf=0, minf=9 00:36:49.721 IO depths : 1=0.1%, 2=0.1%, 4=69.7%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 issued rwts: total=14648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:49.721 filename1: (groupid=0, jobs=1): err= 0: pid=2039811: Wed Nov 6 13:33:31 2024 00:36:49.721 read: IOPS=2880, BW=22.5MiB/s (23.6MB/s)(113MiB/5001msec) 00:36:49.721 slat (nsec): min=5447, max=28574, avg=5957.02, stdev=1286.41 00:36:49.721 clat (usec): min=1325, max=45249, avg=2760.64, stdev=1023.91 00:36:49.721 lat (usec): min=1331, max=45278, avg=2766.60, stdev=1024.08 00:36:49.721 clat percentiles (usec): 00:36:49.721 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:36:49.721 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:36:49.721 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2966], 00:36:49.721 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[45351], 00:36:49.721 | 99.99th=[45351] 00:36:49.721 bw ( KiB/s): min=21008, max=23488, per=24.39%, avg=23043.56, stdev=773.43, samples=9 00:36:49.721 iops : min= 2626, max= 2936, avg=2880.44, stdev=96.68, samples=9 00:36:49.721 lat (msec) : 2=0.06%, 4=99.43%, 10=0.45%, 50=0.06% 00:36:49.721 cpu : usr=96.92%, sys=2.86%, ctx=5, majf=0, minf=9 00:36:49.721 IO depths : 1=0.1%, 2=0.1%, 4=73.9%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:49.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:49.721 issued rwts: total=14405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:49.721 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:49.721 00:36:49.721 Run status group 0 (all jobs): 00:36:49.721 READ: bw=92.2MiB/s (96.7MB/s), 22.5MiB/s-23.7MiB/s (23.6MB/s-24.9MB/s), io=461MiB (484MB), run=5001-5002msec 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.721 13:33:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.721 00:36:49.721 real 0m24.592s 00:36:49.721 user 5m16.391s 00:36:49.721 sys 0m4.876s 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:49.721 ************************************ 00:36:49.721 END TEST fio_dif_rand_params 00:36:49.721 ************************************ 00:36:49.721 13:33:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:49.721 13:33:31 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:49.721 13:33:31 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:49.721 13:33:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:49.983 ************************************ 00:36:49.983 START TEST fio_dif_digest 00:36:49.983 ************************************ 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:49.983 bdev_null0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:49.983 [2024-11-06 13:33:31.666902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:49.983 { 00:36:49.983 "params": { 00:36:49.983 "name": "Nvme$subsystem", 00:36:49.983 "trtype": "$TEST_TRANSPORT", 00:36:49.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:49.983 "adrfam": "ipv4", 00:36:49.983 "trsvcid": "$NVMF_PORT", 00:36:49.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:49.983 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:49.983 "hdgst": ${hdgst:-false}, 00:36:49.983 "ddgst": ${ddgst:-false} 00:36:49.983 }, 00:36:49.983 "method": "bdev_nvme_attach_controller" 00:36:49.983 } 00:36:49.983 EOF 00:36:49.983 )") 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:49.983 13:33:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:49.983 "params": { 00:36:49.983 "name": "Nvme0", 00:36:49.983 "trtype": "tcp", 00:36:49.983 "traddr": "10.0.0.2", 00:36:49.983 "adrfam": "ipv4", 00:36:49.983 "trsvcid": "4420", 00:36:49.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.983 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.984 "hdgst": true, 00:36:49.984 "ddgst": true 00:36:49.984 }, 00:36:49.984 "method": "bdev_nvme_attach_controller" 00:36:49.984 }' 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:49.984 13:33:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:50.245 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:50.245 ... 
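This digest run differs from the earlier ones in the attach parameters printed above: "hdgst": true and "ddgst": true enable the NVMe/TCP header and data digests, a CRC32C over each PDU header and payload, so every 128 KiB transfer in this job is checksummed end to end. A kernel initiator can request the same protection at connect time; as a hedged illustration only (nvme-cli flag names from memory, not from this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest
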
00:36:50.245 fio-3.35 00:36:50.245 Starting 3 threads 00:37:02.482 00:37:02.482 filename0: (groupid=0, jobs=1): err= 0: pid=2041212: Wed Nov 6 13:33:42 2024 00:37:02.482 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(396MiB/10044msec) 00:37:02.482 slat (nsec): min=5810, max=39547, avg=6518.63, stdev=962.22 00:37:02.482 clat (usec): min=4946, max=51374, avg=9495.80, stdev=2137.83 00:37:02.482 lat (usec): min=4952, max=51409, avg=9502.32, stdev=2138.10 00:37:02.482 clat percentiles (usec): 00:37:02.482 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7898], 00:37:02.482 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:37:02.482 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:37:02.482 | 99.00th=[12125], 99.50th=[12387], 99.90th=[50070], 99.95th=[51119], 00:37:02.482 | 99.99th=[51119] 00:37:02.482 bw ( KiB/s): min=37376, max=44800, per=34.57%, avg=40499.20, stdev=1891.45, samples=20 00:37:02.482 iops : min= 292, max= 350, avg=316.40, stdev=14.78, samples=20 00:37:02.482 lat (msec) : 10=57.71%, 20=42.14%, 50=0.03%, 100=0.13% 00:37:02.482 cpu : usr=94.21%, sys=5.55%, ctx=19, majf=0, minf=181 00:37:02.482 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.482 issued rwts: total=3166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.482 filename0: (groupid=0, jobs=1): err= 0: pid=2041213: Wed Nov 6 13:33:42 2024 00:37:02.482 read: IOPS=381, BW=47.7MiB/s (50.1MB/s)(480MiB/10046msec) 00:37:02.482 slat (nsec): min=5815, max=34608, avg=6464.31, stdev=814.34 00:37:02.482 clat (usec): min=4903, max=51556, avg=7835.21, stdev=2731.68 00:37:02.482 lat (usec): min=4909, max=51563, avg=7841.68, stdev=2731.68 00:37:02.482 clat percentiles (usec): 00:37:02.482 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6456], 00:37:02.482 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8160], 00:37:02.482 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:37:02.482 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[50070], 99.95th=[50594], 00:37:02.482 | 99.99th=[51643] 00:37:02.482 bw ( KiB/s): min=45568, max=52736, per=41.89%, avg=49075.20, stdev=1831.22, samples=20 00:37:02.482 iops : min= 356, max= 412, avg=383.40, stdev=14.31, samples=20 00:37:02.482 lat (msec) : 10=99.32%, 20=0.31%, 50=0.23%, 100=0.13% 00:37:02.482 cpu : usr=96.17%, sys=3.61%, ctx=28, majf=0, minf=113 00:37:02.482 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.482 issued rwts: total=3837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.483 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.483 filename0: (groupid=0, jobs=1): err= 0: pid=2041214: Wed Nov 6 13:33:42 2024 00:37:02.483 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10045msec) 00:37:02.483 slat (nsec): min=5871, max=30607, avg=6907.40, stdev=1096.69 00:37:02.483 clat (msec): min=6, max=130, avg=13.73, stdev=11.96 00:37:02.483 lat (msec): min=6, max=130, avg=13.73, stdev=11.96 00:37:02.483 clat percentiles (msec): 00:37:02.483 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:37:02.483 | 30.00th=[ 
10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:37:02.483 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 51], 00:37:02.483 | 99.00th=[ 53], 99.50th=[ 53], 99.90th=[ 92], 99.95th=[ 92], 00:37:02.483 | 99.99th=[ 131] 00:37:02.483 bw ( KiB/s): min=19968, max=36864, per=23.92%, avg=28019.20, stdev=4195.40, samples=20 00:37:02.483 iops : min= 156, max= 288, avg=218.90, stdev=32.78, samples=20 00:37:02.483 lat (msec) : 10=32.91%, 20=58.88%, 50=1.92%, 100=6.25%, 250=0.05% 00:37:02.483 cpu : usr=95.15%, sys=4.62%, ctx=24, majf=0, minf=87 00:37:02.483 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.483 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.483 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:02.483 00:37:02.483 Run status group 0 (all jobs): 00:37:02.483 READ: bw=114MiB/s (120MB/s), 27.3MiB/s-47.7MiB/s (28.6MB/s-50.1MB/s), io=1149MiB (1205MB), run=10044-10046msec 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.483 00:37:02.483 real 0m11.215s 00:37:02.483 user 0m45.192s 00:37:02.483 sys 0m1.703s 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:02.483 13:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.483 ************************************ 00:37:02.483 END TEST fio_dif_digest 00:37:02.483 ************************************ 00:37:02.483 13:33:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:02.483 13:33:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:02.483 rmmod nvme_tcp 00:37:02.483 rmmod nvme_fabrics 00:37:02.483 rmmod nvme_keyring 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2030289 ']' 00:37:02.483 13:33:42 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2030289 00:37:02.483 13:33:42 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 2030289 ']' 00:37:02.483 13:33:42 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 2030289 00:37:02.483 13:33:42 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:02.483 13:33:42 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:02.483 13:33:42 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2030289 00:37:02.483 13:33:43 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:02.483 13:33:43 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:02.483 13:33:43 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2030289' 00:37:02.483 killing process with pid 2030289 00:37:02.483 13:33:43 nvmf_dif -- common/autotest_common.sh@971 -- # kill 2030289 00:37:02.483 13:33:43 nvmf_dif -- common/autotest_common.sh@976 -- # wait 2030289 00:37:02.483 13:33:43 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:02.483 13:33:43 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:05.029 Waiting for block devices as requested 00:37:05.029 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:05.029 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:05.029 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:05.029 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:05.029 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:05.290 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:05.290 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:05.290 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:05.551 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:05.551 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:05.812 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:05.812 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:05.812 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:06.072 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:06.072 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:06.072 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:06.072 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:06.643 13:33:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:06.643 13:33:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:06.643 13:33:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:06.643 13:33:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:06.643 13:33:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:06.644 13:33:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:06.644 13:33:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:06.644 13:33:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:06.644 13:33:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:06.644 13:33:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:06.644 13:33:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.557 13:33:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:08.557 00:37:08.557 real 1m18.776s 00:37:08.557 
user 8m0.583s 00:37:08.557 sys 0m22.226s 00:37:08.557 13:33:50 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:08.557 13:33:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:08.557 ************************************ 00:37:08.557 END TEST nvmf_dif 00:37:08.557 ************************************ 00:37:08.557 13:33:50 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:08.557 13:33:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:08.557 13:33:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:08.557 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:37:08.557 ************************************ 00:37:08.557 START TEST nvmf_abort_qd_sizes 00:37:08.557 ************************************ 00:37:08.557 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:08.819 * Looking for test storage... 00:37:08.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.819 --rc genhtml_branch_coverage=1 00:37:08.819 --rc genhtml_function_coverage=1 00:37:08.819 --rc genhtml_legend=1 00:37:08.819 --rc geninfo_all_blocks=1 00:37:08.819 --rc geninfo_unexecuted_blocks=1 00:37:08.819 00:37:08.819 ' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.819 --rc genhtml_branch_coverage=1 00:37:08.819 --rc genhtml_function_coverage=1 00:37:08.819 --rc genhtml_legend=1 00:37:08.819 --rc geninfo_all_blocks=1 00:37:08.819 --rc geninfo_unexecuted_blocks=1 00:37:08.819 00:37:08.819 ' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.819 --rc genhtml_branch_coverage=1 00:37:08.819 --rc genhtml_function_coverage=1 00:37:08.819 --rc genhtml_legend=1 00:37:08.819 --rc geninfo_all_blocks=1 00:37:08.819 --rc geninfo_unexecuted_blocks=1 00:37:08.819 00:37:08.819 ' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:08.819 --rc genhtml_branch_coverage=1 00:37:08.819 --rc genhtml_function_coverage=1 00:37:08.819 --rc genhtml_legend=1 00:37:08.819 --rc geninfo_all_blocks=1 00:37:08.819 --rc geninfo_unexecuted_blocks=1 00:37:08.819 00:37:08.819 ' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
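[Annotation] The lt/cmp_versions trace above compares dotted version strings component by component, treating missing components as zero (so 1.15 < 2). A condensed paraphrase of that logic follows; the real script also handles '-' and ':' separators and other comparison operators, which this sketch omits.

# Condensed sketch of the cmp_versions logic traced above: succeed if $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)                    # split each version on '.'
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}         # missing components compare as 0
    ((x < y)) && return 0
    ((x > y)) && return 1
  done
  return 1                                  # equal -> not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"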
NVMF_IP_PREFIX=192.168.100 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
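[Annotation] The common.sh trace above derives the test's host identity once with nvme gen-hostnqn and caches it in the NVME_HOST array for later nvme-cli calls. A small sketch of that pattern; the connect line is illustrative (commented out) and its address/subnqn are placeholders matching the test values.

# Sketch: derive the host identity once and reuse it, as NVME_HOST does above.
NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # strip the prefix to get the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

# Illustrative use with nvme connect (not executed in this part of the trace):
# nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn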
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:08.819 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:08.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:08.820 13:33:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:16.959 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:16.960 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:16.960 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:16.960 Found net devices under 0000:31:00.0: cvl_0_0 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:16.960 Found net devices under 0000:31:00.1: cvl_0_1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:16.960 13:33:57 
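[Annotation] The discovery loop above maps each supported NIC PCI function to its kernel net device by globbing the device's sysfs net/ node. A compact sketch of that mapping, using the e810 addresses and the "Found net devices" output format from the log.

# Sketch of the PCI-to-netdev mapping above: the attached interface name is
# the directory under the PCI function's sysfs net/ node.
for pci in 0000:31:00.0 0000:31:00.1; do              # e810 functions found in the log
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $netdir ]] || continue                      # skip if nothing is bound
    echo "Found net devices under $pci: ${netdir##*/}"   # e.g. cvl_0_0 / cvl_0_1
  done
done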
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:16.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:16.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:37:16.960 00:37:16.960 --- 10.0.0.2 ping statistics --- 00:37:16.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.960 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:16.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
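[Annotation] nvmf_tcp_init above builds a two-sided topology on one machine: one e810 port moves into a private namespace as the target side (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so teardown can strip it later. A sketch with the names and addresses from the log; the iptables comment is abbreviated relative to the trace.

# Sketch of the namespace topology assembled above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so 'iptables-save | grep -v SPDK_NVMF' can remove it at teardown:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                    # initiator -> target sanity check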
00:37:16.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:37:16.960 00:37:16.960 --- 10.0.0.1 ping statistics --- 00:37:16.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:16.960 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:16.960 13:33:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:19.505 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:19.505 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:19.765 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:20.025 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2050811 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2050811 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 2050811 ']' 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
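[Annotation] nvmfappstart above launches nvmf_tgt inside the namespace and blocks until the application's RPC socket answers. A sketch of that start-and-poll pattern, assuming SPDK's stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket; the poll loop is a simplification of waitforlisten, not its exact code.

# Sketch of nvmfappstart above: run the target in the netns, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!

# waitforlisten-style poll (simplified): any successful RPC means the app is up.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
  sleep 0.5
done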
00:37:20.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:20.286 13:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:20.286 [2024-11-06 13:34:02.014004] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:37:20.286 [2024-11-06 13:34:02.014067] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:20.286 [2024-11-06 13:34:02.113802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:20.286 [2024-11-06 13:34:02.167473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:20.286 [2024-11-06 13:34:02.167525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:20.286 [2024-11-06 13:34:02.167534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:20.286 [2024-11-06 13:34:02.167541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:20.286 [2024-11-06 13:34:02.167547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:20.286 [2024-11-06 13:34:02.169892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.286 [2024-11-06 13:34:02.170040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:20.286 [2024-11-06 13:34:02.170200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.286 [2024-11-06 13:34:02.170200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:21.227 
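[Annotation] nvme_in_userspace above collects NVMe-class PCI functions (class code 0x010802) that are still bound to the kernel nvme driver, i.e. usable as test targets. A sketch reading the same sysfs attributes the script checks.

# Sketch of nvme_in_userspace above: list NVMe PCI functions still on the kernel driver.
for dev in /sys/bus/pci/devices/*; do
  [[ $(<"$dev/class") == 0x010802 ]] || continue      # NVMe I/O controller class code
  bdf=${dev##*/}
  [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"   # prints e.g. 0000:65:00.0
done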
13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:21.227 13:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:21.227 ************************************ 00:37:21.227 START TEST spdk_target_abort 00:37:21.227 ************************************ 00:37:21.227 13:34:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:21.227 13:34:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:21.227 13:34:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:21.227 13:34:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.227 13:34:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.488 spdk_targetn1 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.488 [2024-11-06 13:34:03.226525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:21.488 [2024-11-06 13:34:03.278839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:21.488 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:21.489 13:34:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:21.780 [2024-11-06 13:34:03.476707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
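[Annotation] The spdk_target_abort setup above is a plain RPC sequence followed by a queue-depth sweep. A condensed sketch issued through scripts/rpc.py (rpc_cmd in the trace is a thin wrapper over the same client); command names, flags, and the abort example invocation are taken verbatim from the log, and paths assume an SPDK checkout.

# Sketch of the spdk_target_abort setup above, via SPDK's rpc.py client.
rpc=./scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target    # -> spdk_targetn1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Queue-depth sweep with the bundled abort example, as rabort drives it above:
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done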
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.476743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0006 p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.477965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:104 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.477985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:000f p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.508281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1008 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.508303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007f p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.511080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1168 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.511099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0094 p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.533653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1880 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.533674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ed p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1976 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.535318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.548320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2384 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.548340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.556298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2632 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.556317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.558743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2784 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.558767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:21.780 [2024-11-06 13:34:03.596737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3968 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:21.780 [2024-11-06 13:34:03.596762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f2 p:0 m:0 dnr:0 00:37:25.079 Initializing NVMe Controllers 00:37:25.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:25.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 
00:37:25.079 Initialization complete. Launching workers. 00:37:25.079 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11757, failed: 10 00:37:25.079 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2398, failed to submit 9369 00:37:25.079 success 770, unsuccessful 1628, failed 0 00:37:25.079 13:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:25.079 13:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:25.079 [2024-11-06 13:34:06.666060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.666092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.673927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:608 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.673951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.784857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:3184 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.784886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0090 p:0 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.792858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:3432 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.792881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.808837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:3688 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.808861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00d8 p:0 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.824898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:4072 len:8 PRP1 0x200004e3c000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.824920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:37:25.079 [2024-11-06 13:34:06.920850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:6280 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:25.079 [2024-11-06 13:34:06.920877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:25.340 [2024-11-06 13:34:07.039821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:8912 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:25.340 [2024-11-06 13:34:07.039849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0068 p:1 m:0 dnr:0 00:37:25.340 [2024-11-06 13:34:07.135841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:11192 len:8 PRP1 0x200004e52000 PRP2 0x0 00:37:25.340 [2024-11-06 13:34:07.135867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:25.911 [2024-11-06 13:34:07.707785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:24096 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:25.911 [2024-11-06 13:34:07.707816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00ca p:0 m:0 dnr:0 00:37:26.172 [2024-11-06 13:34:08.038084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:31672 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:26.172 [2024-11-06 13:34:08.038113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:28.084 [2024-11-06 13:34:09.787122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45410 is same with the state(6) to be set 00:37:28.084 [2024-11-06 13:34:09.787146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45410 is same with the state(6) to be set 00:37:28.084 Initializing NVMe Controllers 00:37:28.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:28.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:28.084 Initialization complete. Launching workers. 00:37:28.084 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8639, failed: 11 00:37:28.084 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7418 00:37:28.084 success 302, unsuccessful 930, failed 0 00:37:28.084 13:34:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:28.084 13:34:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:29.026 [2024-11-06 13:34:10.868265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:182 nsid:1 lba:90992 len:8 PRP1 0x200004ae8000 PRP2 0x0 00:37:29.026 [2024-11-06 13:34:10.868302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:182 cdw0:0 sqhd:0066 p:1 m:0 dnr:0 00:37:29.602 [2024-11-06 13:34:11.211188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:130072 len:8 PRP1 0x200004b16000 PRP2 0x0 00:37:29.602 [2024-11-06 13:34:11.211227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0082 p:0 m:0 dnr:0 00:37:31.615 Initializing NVMe Controllers 00:37:31.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:31.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:31.615 Initialization complete. Launching workers. 
00:37:31.615 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42984, failed: 2 00:37:31.615 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2670, failed to submit 40316 00:37:31.615 success 597, unsuccessful 2073, failed 0 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.615 13:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2050811 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 2050811 ']' 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 2050811 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:33.527 13:34:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2050811 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2050811' 00:37:33.527 killing process with pid 2050811 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 2050811 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 2050811 00:37:33.527 00:37:33.527 real 0m12.214s 00:37:33.527 user 0m49.624s 00:37:33.527 sys 0m2.093s 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.527 ************************************ 00:37:33.527 END TEST spdk_target_abort 00:37:33.527 ************************************ 00:37:33.527 13:34:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:33.527 13:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:33.527 13:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:33.527 13:34:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:33.527 ************************************ 00:37:33.527 START TEST kernel_target_abort 00:37:33.527 
************************************ 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:33.527 13:34:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:36.906 Waiting for block devices as requested 00:37:36.906 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:36.906 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:37.166 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:37.166 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:37.166 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:37.427 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:37.427 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:37.427 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:37.687 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:37.687 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:37.952 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:37.952 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:37.952 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:38.215 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.215 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:38.215 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:38.476 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:38.737 No valid GPT data, bailing 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:38.737 13:34:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:38.737 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:37:38.998 00:37:38.998 Discovery Log Number of Records 2, Generation counter 2 00:37:38.998 =====Discovery Log Entry 0====== 00:37:38.998 trtype: tcp 00:37:38.998 adrfam: ipv4 00:37:38.998 subtype: current discovery subsystem 00:37:38.998 treq: not specified, sq flow control disable supported 00:37:38.998 portid: 1 00:37:38.998 trsvcid: 4420 00:37:38.998 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:38.998 traddr: 10.0.0.1 00:37:38.998 eflags: none 00:37:38.998 sectype: none 00:37:38.998 =====Discovery Log Entry 1====== 00:37:38.998 trtype: tcp 00:37:38.998 adrfam: ipv4 00:37:38.998 subtype: nvme subsystem 00:37:38.998 treq: not specified, sq flow control disable supported 00:37:38.998 portid: 1 00:37:38.998 trsvcid: 4420 00:37:38.998 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:38.998 traddr: 10.0.0.1 00:37:38.998 eflags: none 00:37:38.998 sectype: none 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:38.998 13:34:20 
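[Annotation] configure_kernel_target above builds the in-kernel NVMe-oF target purely through configfs mkdir/echo/ln. A sketch of that layout using the standard nvmet attribute names; which attribute each bare 'echo' in the trace lands in is inferred, so treat the flagged file names as assumptions.

# Sketch of the configfs layout configure_kernel_target assembles above.
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet-tcp                     # trace loads nvmet; nvmet-tcp added for the tcp port
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # inferred target of the SPDK-... echo
echo 1 > "$sub/attr_allow_any_host"          # inferred target of the first bare 'echo 1'
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$sub" "$port/subsystems/"             # export the subsystem on the port

# Verified above with: nvme discover --hostnqn=... --hostid=... -a 10.0.0.1 -t tcp -s 4420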
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:38.998 13:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:42.299 Initializing NVMe Controllers 00:37:42.299 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:42.299 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:42.299 Initialization complete. Launching workers. 00:37:42.299 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67236, failed: 0 00:37:42.299 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67236, failed to submit 0 00:37:42.299 success 0, unsuccessful 67236, failed 0 00:37:42.299 13:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:42.299 13:34:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:45.595 Initializing NVMe Controllers 00:37:45.595 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:45.595 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:45.595 Initialization complete. Launching workers. 
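
For reference, the kernel nvmet target these abort runs drive was assembled through configfs in the setup traced above; the qd=24 run's completion counts continue below. A minimal bash sketch of that sequence, reusing the NQN, backing device, address, and port from the log. Note that xtrace does not print redirection targets, so the attribute file names here are the standard kernel nvmet configfs entries and are an assumption about where each echo in the trace lands:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                  # nvmet_tcp is pulled in when the tcp port is configured
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"    # assumed destination of the first echo
echo 1 > "$subsys/attr_allow_any_host"                           # assumed destination
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"           # device picked by the GPT scan above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

The clean_kernel_target teardown later in this log mirrors the sequence in reverse: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.
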
00:37:45.595 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 117045, failed: 0 00:37:45.595 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29456, failed to submit 87589 00:37:45.595 success 0, unsuccessful 29456, failed 0 00:37:45.595 13:34:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:45.595 13:34:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:48.136 Initializing NVMe Controllers 00:37:48.136 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:48.136 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:48.136 Initialization complete. Launching workers. 00:37:48.136 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146491, failed: 0 00:37:48.136 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36638, failed to submit 109853 00:37:48.136 success 0, unsuccessful 36638, failed 0 00:37:48.136 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:48.136 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:48.136 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:48.136 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:48.136 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:48.397 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:48.397 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:48.397 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:48.397 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:48.397 13:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:51.697 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:51.697 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:51.697 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:51.958 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:51.958 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:53.869 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:54.128 00:37:54.128 real 0m20.672s 00:37:54.128 user 0m10.039s 00:37:54.128 sys 0m6.238s 00:37:54.128 13:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:54.128 13:34:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.128 ************************************ 00:37:54.128 END TEST kernel_target_abort 00:37:54.128 ************************************ 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:54.128 rmmod nvme_tcp 00:37:54.128 rmmod nvme_fabrics 00:37:54.128 rmmod nvme_keyring 00:37:54.128 13:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2050811 ']' 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2050811 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 2050811 ']' 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 2050811 00:37:54.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2050811) - No such process 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 2050811 is not found' 00:37:54.128 Process with pid 2050811 is not found 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:54.128 13:34:36 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.329 Waiting for block devices as requested 00:37:58.329 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:58.329 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:58.589 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:58.589 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:58.849 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:58.849 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:58.849 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:59.110 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:59.110 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:59.110 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:59.371 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:59.631 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:59.631 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:59.631 13:34:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.631 13:34:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:59.631 13:34:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:01.543 13:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:01.543 00:38:01.543 real 0m52.908s 00:38:01.543 user 1m5.098s 00:38:01.543 sys 0m19.489s 00:38:01.543 13:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:01.543 13:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:01.543 ************************************ 00:38:01.543 END TEST nvmf_abort_qd_sizes 00:38:01.543 ************************************ 00:38:01.543 13:34:43 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:01.543 13:34:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:01.543 13:34:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:01.544 13:34:43 -- common/autotest_common.sh@10 -- # set +x 00:38:01.544 ************************************ 00:38:01.544 START TEST keyring_file 00:38:01.544 ************************************ 00:38:01.544 13:34:43 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:01.805 * Looking for test storage... 
00:38:01.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:01.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.805 --rc genhtml_branch_coverage=1 00:38:01.805 --rc genhtml_function_coverage=1 00:38:01.805 --rc genhtml_legend=1 00:38:01.805 --rc geninfo_all_blocks=1 00:38:01.805 --rc geninfo_unexecuted_blocks=1 00:38:01.805 00:38:01.805 ' 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:01.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.805 --rc genhtml_branch_coverage=1 00:38:01.805 --rc genhtml_function_coverage=1 00:38:01.805 --rc genhtml_legend=1 00:38:01.805 --rc geninfo_all_blocks=1 
00:38:01.805 --rc geninfo_unexecuted_blocks=1 00:38:01.805 00:38:01.805 ' 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:01.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.805 --rc genhtml_branch_coverage=1 00:38:01.805 --rc genhtml_function_coverage=1 00:38:01.805 --rc genhtml_legend=1 00:38:01.805 --rc geninfo_all_blocks=1 00:38:01.805 --rc geninfo_unexecuted_blocks=1 00:38:01.805 00:38:01.805 ' 00:38:01.805 13:34:43 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:01.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:01.805 --rc genhtml_branch_coverage=1 00:38:01.805 --rc genhtml_function_coverage=1 00:38:01.805 --rc genhtml_legend=1 00:38:01.805 --rc geninfo_all_blocks=1 00:38:01.805 --rc geninfo_unexecuted_blocks=1 00:38:01.805 00:38:01.805 ' 00:38:01.805 13:34:43 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:01.805 13:34:43 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.805 13:34:43 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.805 13:34:43 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.805 13:34:43 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.805 13:34:43 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.805 13:34:43 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:01.805 13:34:43 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:01.805 13:34:43 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:01.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:01.806 13:34:43 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0Gx4VgCLsJ 00:38:01.806 13:34:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:01.806 13:34:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0Gx4VgCLsJ 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0Gx4VgCLsJ 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0Gx4VgCLsJ 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9wMYTNB0T5 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:02.066 13:34:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9wMYTNB0T5 00:38:02.066 13:34:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9wMYTNB0T5 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9wMYTNB0T5 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@30 -- # tgtpid=2061042 00:38:02.066 13:34:43 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2061042 00:38:02.066 13:34:43 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2061042 ']' 00:38:02.066 13:34:43 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.067 13:34:43 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:02.067 13:34:43 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.067 13:34:43 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:02.067 13:34:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.067 [2024-11-06 13:34:43.842092] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:38:02.067 [2024-11-06 13:34:43.842169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061042 ] 00:38:02.067 [2024-11-06 13:34:43.935701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.326 [2024-11-06 13:34:43.988641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:02.896 13:34:44 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.896 [2024-11-06 13:34:44.660577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.896 null0 00:38:02.896 [2024-11-06 13:34:44.692626] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:02.896 [2024-11-06 13:34:44.692926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.896 13:34:44 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.896 [2024-11-06 13:34:44.724693] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:02.896 request: 00:38:02.896 { 00:38:02.896 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.896 "secure_channel": false, 00:38:02.896 "listen_address": { 00:38:02.896 "trtype": "tcp", 00:38:02.896 "traddr": "127.0.0.1", 00:38:02.896 "trsvcid": "4420" 00:38:02.896 }, 00:38:02.896 "method": "nvmf_subsystem_add_listener", 00:38:02.896 "req_id": 1 00:38:02.896 } 00:38:02.896 Got JSON-RPC error response 00:38:02.896 response: 00:38:02.896 { 00:38:02.896 
"code": -32602, 00:38:02.896 "message": "Invalid parameters" 00:38:02.896 } 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:02.896 13:34:44 keyring_file -- keyring/file.sh@47 -- # bperfpid=2061072 00:38:02.896 13:34:44 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2061072 /var/tmp/bperf.sock 00:38:02.896 13:34:44 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2061072 ']' 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:02.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:02.896 13:34:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:02.897 [2024-11-06 13:34:44.791129] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:38:02.897 [2024-11-06 13:34:44.791178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061072 ] 00:38:03.156 [2024-11-06 13:34:44.879441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.156 [2024-11-06 13:34:44.916657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.728 13:34:45 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:03.728 13:34:45 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:03.728 13:34:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:03.728 13:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:03.988 13:34:45 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9wMYTNB0T5 00:38:03.988 13:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9wMYTNB0T5 00:38:04.248 13:34:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:04.248 13:34:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:04.248 13:34:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.248 13:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.248 13:34:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:04.248 13:34:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0Gx4VgCLsJ == \/\t\m\p\/\t\m\p\.\0\G\x\4\V\g\C\L\s\J ]] 00:38:04.248 13:34:46 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:04.248 13:34:46 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:04.248 13:34:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:04.248 13:34:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.248 13:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.509 13:34:46 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9wMYTNB0T5 == \/\t\m\p\/\t\m\p\.\9\w\M\Y\T\N\B\0\T\5 ]] 00:38:04.509 13:34:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:04.509 13:34:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:04.509 13:34:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:04.509 13:34:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.509 13:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.509 13:34:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:04.769 13:34:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:04.770 13:34:46 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:04.770 13:34:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:04.770 13:34:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:04.770 13:34:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:04.770 13:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:04.770 13:34:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.031 13:34:46 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:05.031 13:34:46 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.031 13:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:05.031 [2024-11-06 13:34:46.900326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:05.291 nvme0n1 00:38:05.291 13:34:46 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:05.291 13:34:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:05.291 13:34:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.291 13:34:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.291 13:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.291 13:34:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:05.552 13:34:47 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:05.552 13:34:47 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:05.552 13:34:47 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:38:05.552 13:34:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:05.552 13:34:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:05.552 13:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:05.552 13:34:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:05.552 13:34:47 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:05.552 13:34:47 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:05.552 Running I/O for 1 seconds... 00:38:06.934 19902.00 IOPS, 77.74 MiB/s 00:38:06.934 Latency(us) 00:38:06.934 [2024-11-06T12:34:48.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.934 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:06.934 nvme0n1 : 1.00 19957.44 77.96 0.00 0.00 6402.85 2280.11 12397.23 00:38:06.934 [2024-11-06T12:34:48.836Z] =================================================================================================================== 00:38:06.934 [2024-11-06T12:34:48.836Z] Total : 19957.44 77.96 0.00 0.00 6402.85 2280.11 12397.23 00:38:06.934 { 00:38:06.934 "results": [ 00:38:06.934 { 00:38:06.934 "job": "nvme0n1", 00:38:06.934 "core_mask": "0x2", 00:38:06.934 "workload": "randrw", 00:38:06.934 "percentage": 50, 00:38:06.934 "status": "finished", 00:38:06.934 "queue_depth": 128, 00:38:06.934 "io_size": 4096, 00:38:06.934 "runtime": 1.003686, 00:38:06.934 "iops": 19957.436887632186, 00:38:06.934 "mibps": 77.95873784231323, 00:38:06.934 "io_failed": 0, 00:38:06.934 "io_timeout": 0, 00:38:06.934 "avg_latency_us": 6402.85168655251, 00:38:06.934 "min_latency_us": 2280.1066666666666, 00:38:06.934 "max_latency_us": 12397.226666666667 00:38:06.934 } 00:38:06.934 ], 00:38:06.934 "core_count": 1 00:38:06.934 } 00:38:06.934 13:34:48 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:06.934 13:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:06.934 13:34:48 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:06.934 13:34:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:06.934 13:34:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:06.934 13:34:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:06.934 13:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:06.935 13:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.196 13:34:48 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:07.196 13:34:48 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:07.196 13:34:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:07.196 13:34:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.196 13:34:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.196 13:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.196 13:34:48 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:07.196 13:34:49 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:07.196 13:34:49 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:07.196 13:34:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:07.196 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:07.457 [2024-11-06 13:34:49.205381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:07.457 [2024-11-06 13:34:49.206294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2266cb0 (107): Transport endpoint is not connected 00:38:07.457 [2024-11-06 13:34:49.207290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2266cb0 (9): Bad file descriptor 00:38:07.457 [2024-11-06 13:34:49.208292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:07.457 [2024-11-06 13:34:49.208301] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:07.457 [2024-11-06 13:34:49.208312] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:07.457 [2024-11-06 13:34:49.208318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
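
The attach above used --psk key1 against a target that was set up with key0, so the controller never initializes and bdev_nvme_attach_controller returns an error; the JSON-RPC request and error response for that deliberately failed call follow below. The NOT wrapper turns this expected failure into a passing check, and its status handling is visible in the trace after the dump (es=1, the es > 128 signal check, the (( !es == 0 )) inversion). A simplified sketch of the pattern; SPDK's real helper in autotest_common.sh additionally validates the command via valid_exec_arg first:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # death by signal is a crash, not an expected failure
    (( es != 0 ))                    # succeed only if the wrapped command failed
}

# as in the trace: attaching with the wrong PSK must fail
NOT ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
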
00:38:07.457 request: 00:38:07.457 { 00:38:07.457 "name": "nvme0", 00:38:07.457 "trtype": "tcp", 00:38:07.457 "traddr": "127.0.0.1", 00:38:07.457 "adrfam": "ipv4", 00:38:07.457 "trsvcid": "4420", 00:38:07.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:07.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:07.457 "prchk_reftag": false, 00:38:07.457 "prchk_guard": false, 00:38:07.457 "hdgst": false, 00:38:07.457 "ddgst": false, 00:38:07.457 "psk": "key1", 00:38:07.457 "allow_unrecognized_csi": false, 00:38:07.457 "method": "bdev_nvme_attach_controller", 00:38:07.457 "req_id": 1 00:38:07.457 } 00:38:07.457 Got JSON-RPC error response 00:38:07.457 response: 00:38:07.457 { 00:38:07.457 "code": -5, 00:38:07.457 "message": "Input/output error" 00:38:07.457 } 00:38:07.457 13:34:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:07.457 13:34:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:07.457 13:34:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:07.457 13:34:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:07.457 13:34:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:07.457 13:34:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:07.457 13:34:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.457 13:34:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.457 13:34:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:07.457 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.717 13:34:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:07.717 13:34:49 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:07.717 13:34:49 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:07.717 13:34:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:07.717 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:07.979 13:34:49 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:07.979 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:08.239 13:34:49 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:08.239 13:34:49 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:08.239 13:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.239 13:34:50 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:08.239 13:34:50 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0Gx4VgCLsJ 00:38:08.239 13:34:50 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.239 13:34:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.239 13:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.501 [2024-11-06 13:34:50.230948] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0Gx4VgCLsJ': 0100660 00:38:08.501 [2024-11-06 13:34:50.230968] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:08.501 request: 00:38:08.501 { 00:38:08.501 "name": "key0", 00:38:08.501 "path": "/tmp/tmp.0Gx4VgCLsJ", 00:38:08.501 "method": "keyring_file_add_key", 00:38:08.501 "req_id": 1 00:38:08.501 } 00:38:08.501 Got JSON-RPC error response 00:38:08.501 response: 00:38:08.501 { 00:38:08.501 "code": -1, 00:38:08.501 "message": "Operation not permitted" 00:38:08.501 } 00:38:08.501 13:34:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:08.501 13:34:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:08.501 13:34:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:08.501 13:34:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:08.501 13:34:50 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0Gx4VgCLsJ 00:38:08.501 13:34:50 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.501 13:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0Gx4VgCLsJ 00:38:08.761 13:34:50 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0Gx4VgCLsJ 00:38:08.761 13:34:50 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:08.762 13:34:50 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:08.762 13:34:50 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:08.762 13:34:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:08.762 13:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.023 [2024-11-06 13:34:50.772312] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0Gx4VgCLsJ': No such file or directory 00:38:09.023 [2024-11-06 13:34:50.772325] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:09.023 [2024-11-06 13:34:50.772338] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:09.023 [2024-11-06 13:34:50.772343] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:09.023 [2024-11-06 13:34:50.772349] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:09.023 [2024-11-06 13:34:50.772354] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:09.023 request: 00:38:09.023 { 00:38:09.023 "name": "nvme0", 00:38:09.023 "trtype": "tcp", 00:38:09.023 "traddr": "127.0.0.1", 00:38:09.023 "adrfam": "ipv4", 00:38:09.023 "trsvcid": "4420", 00:38:09.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:09.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:09.023 "prchk_reftag": false, 00:38:09.024 "prchk_guard": false, 00:38:09.024 "hdgst": false, 00:38:09.024 "ddgst": false, 00:38:09.024 "psk": "key0", 00:38:09.024 "allow_unrecognized_csi": false, 00:38:09.024 "method": "bdev_nvme_attach_controller", 00:38:09.024 "req_id": 1 00:38:09.024 } 00:38:09.024 Got JSON-RPC error response 00:38:09.024 response: 00:38:09.024 { 00:38:09.024 "code": -19, 00:38:09.024 "message": "No such device" 00:38:09.024 } 00:38:09.024 13:34:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:09.024 13:34:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:09.024 13:34:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:09.024 13:34:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:09.024 13:34:50 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:09.024 13:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:09.284 13:34:50 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:09.284 13:34:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BppoVpXlHT 00:38:09.285 13:34:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:09.285 13:34:50 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:09.285 13:34:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BppoVpXlHT 00:38:09.285 13:34:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BppoVpXlHT 00:38:09.285 13:34:51 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.BppoVpXlHT 00:38:09.285 13:34:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BppoVpXlHT 00:38:09.285 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BppoVpXlHT 00:38:09.546 13:34:51 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.546 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:09.546 nvme0n1 00:38:09.546 13:34:51 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:09.546 13:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:09.546 13:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:09.806 13:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:09.806 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:09.806 13:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:09.806 13:34:51 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:09.806 13:34:51 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:09.806 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:10.067 13:34:51 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:10.067 13:34:51 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:10.067 13:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.067 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:10.067 13:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.327 13:34:51 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:10.327 13:34:51 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:10.327 13:34:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:10.327 13:34:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:10.327 13:34:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:10.327 13:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.327 13:34:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:10.327 13:34:52 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:10.327 13:34:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:10.327 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:10.625 13:34:52 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:10.625 13:34:52 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:10.625 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:10.625 13:34:52 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:10.625 13:34:52 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BppoVpXlHT 00:38:10.625 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BppoVpXlHT 00:38:10.886 13:34:52 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9wMYTNB0T5 00:38:10.886 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9wMYTNB0T5 00:38:11.147 13:34:52 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:11.147 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:11.147 nvme0n1 00:38:11.407 13:34:53 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:11.407 13:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:11.669 13:34:53 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:11.669 "subsystems": [ 00:38:11.669 { 00:38:11.669 "subsystem": "keyring", 00:38:11.669 "config": [ 00:38:11.669 { 00:38:11.669 "method": "keyring_file_add_key", 00:38:11.669 "params": { 00:38:11.669 "name": "key0", 00:38:11.669 "path": "/tmp/tmp.BppoVpXlHT" 00:38:11.669 } 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "method": "keyring_file_add_key", 00:38:11.669 "params": { 00:38:11.669 "name": "key1", 00:38:11.669 "path": "/tmp/tmp.9wMYTNB0T5" 00:38:11.669 } 00:38:11.669 } 00:38:11.669 ] 
00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "subsystem": "iobuf", 00:38:11.669 "config": [ 00:38:11.669 { 00:38:11.669 "method": "iobuf_set_options", 00:38:11.669 "params": { 00:38:11.669 "small_pool_count": 8192, 00:38:11.669 "large_pool_count": 1024, 00:38:11.669 "small_bufsize": 8192, 00:38:11.669 "large_bufsize": 135168, 00:38:11.669 "enable_numa": false 00:38:11.669 } 00:38:11.669 } 00:38:11.669 ] 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "subsystem": "sock", 00:38:11.669 "config": [ 00:38:11.669 { 00:38:11.669 "method": "sock_set_default_impl", 00:38:11.669 "params": { 00:38:11.669 "impl_name": "posix" 00:38:11.669 } 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "method": "sock_impl_set_options", 00:38:11.669 "params": { 00:38:11.669 "impl_name": "ssl", 00:38:11.669 "recv_buf_size": 4096, 00:38:11.669 "send_buf_size": 4096, 00:38:11.669 "enable_recv_pipe": true, 00:38:11.669 "enable_quickack": false, 00:38:11.669 "enable_placement_id": 0, 00:38:11.669 "enable_zerocopy_send_server": true, 00:38:11.669 "enable_zerocopy_send_client": false, 00:38:11.669 "zerocopy_threshold": 0, 00:38:11.669 "tls_version": 0, 00:38:11.669 "enable_ktls": false 00:38:11.669 } 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "method": "sock_impl_set_options", 00:38:11.669 "params": { 00:38:11.669 "impl_name": "posix", 00:38:11.669 "recv_buf_size": 2097152, 00:38:11.669 "send_buf_size": 2097152, 00:38:11.669 "enable_recv_pipe": true, 00:38:11.669 "enable_quickack": false, 00:38:11.669 "enable_placement_id": 0, 00:38:11.669 "enable_zerocopy_send_server": true, 00:38:11.669 "enable_zerocopy_send_client": false, 00:38:11.669 "zerocopy_threshold": 0, 00:38:11.669 "tls_version": 0, 00:38:11.669 "enable_ktls": false 00:38:11.669 } 00:38:11.669 } 00:38:11.669 ] 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "subsystem": "vmd", 00:38:11.669 "config": [] 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "subsystem": "accel", 00:38:11.669 "config": [ 00:38:11.669 { 00:38:11.669 "method": "accel_set_options", 00:38:11.669 "params": { 00:38:11.669 "small_cache_size": 128, 00:38:11.669 "large_cache_size": 16, 00:38:11.669 "task_count": 2048, 00:38:11.669 "sequence_count": 2048, 00:38:11.669 "buf_count": 2048 00:38:11.669 } 00:38:11.669 } 00:38:11.669 ] 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "subsystem": "bdev", 00:38:11.669 "config": [ 00:38:11.669 { 00:38:11.669 "method": "bdev_set_options", 00:38:11.669 "params": { 00:38:11.669 "bdev_io_pool_size": 65535, 00:38:11.669 "bdev_io_cache_size": 256, 00:38:11.669 "bdev_auto_examine": true, 00:38:11.669 "iobuf_small_cache_size": 128, 00:38:11.669 "iobuf_large_cache_size": 16 00:38:11.669 } 00:38:11.669 }, 00:38:11.669 { 00:38:11.669 "method": "bdev_raid_set_options", 00:38:11.669 "params": { 00:38:11.670 "process_window_size_kb": 1024, 00:38:11.670 "process_max_bandwidth_mb_sec": 0 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "bdev_iscsi_set_options", 00:38:11.670 "params": { 00:38:11.670 "timeout_sec": 30 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "bdev_nvme_set_options", 00:38:11.670 "params": { 00:38:11.670 "action_on_timeout": "none", 00:38:11.670 "timeout_us": 0, 00:38:11.670 "timeout_admin_us": 0, 00:38:11.670 "keep_alive_timeout_ms": 10000, 00:38:11.670 "arbitration_burst": 0, 00:38:11.670 "low_priority_weight": 0, 00:38:11.670 "medium_priority_weight": 0, 00:38:11.670 "high_priority_weight": 0, 00:38:11.670 "nvme_adminq_poll_period_us": 10000, 00:38:11.670 "nvme_ioq_poll_period_us": 0, 00:38:11.670 "io_queue_requests": 512, 
00:38:11.670 "delay_cmd_submit": true, 00:38:11.670 "transport_retry_count": 4, 00:38:11.670 "bdev_retry_count": 3, 00:38:11.670 "transport_ack_timeout": 0, 00:38:11.670 "ctrlr_loss_timeout_sec": 0, 00:38:11.670 "reconnect_delay_sec": 0, 00:38:11.670 "fast_io_fail_timeout_sec": 0, 00:38:11.670 "disable_auto_failback": false, 00:38:11.670 "generate_uuids": false, 00:38:11.670 "transport_tos": 0, 00:38:11.670 "nvme_error_stat": false, 00:38:11.670 "rdma_srq_size": 0, 00:38:11.670 "io_path_stat": false, 00:38:11.670 "allow_accel_sequence": false, 00:38:11.670 "rdma_max_cq_size": 0, 00:38:11.670 "rdma_cm_event_timeout_ms": 0, 00:38:11.670 "dhchap_digests": [ 00:38:11.670 "sha256", 00:38:11.670 "sha384", 00:38:11.670 "sha512" 00:38:11.670 ], 00:38:11.670 "dhchap_dhgroups": [ 00:38:11.670 "null", 00:38:11.670 "ffdhe2048", 00:38:11.670 "ffdhe3072", 00:38:11.670 "ffdhe4096", 00:38:11.670 "ffdhe6144", 00:38:11.670 "ffdhe8192" 00:38:11.670 ] 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "bdev_nvme_attach_controller", 00:38:11.670 "params": { 00:38:11.670 "name": "nvme0", 00:38:11.670 "trtype": "TCP", 00:38:11.670 "adrfam": "IPv4", 00:38:11.670 "traddr": "127.0.0.1", 00:38:11.670 "trsvcid": "4420", 00:38:11.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.670 "prchk_reftag": false, 00:38:11.670 "prchk_guard": false, 00:38:11.670 "ctrlr_loss_timeout_sec": 0, 00:38:11.670 "reconnect_delay_sec": 0, 00:38:11.670 "fast_io_fail_timeout_sec": 0, 00:38:11.670 "psk": "key0", 00:38:11.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:11.670 "hdgst": false, 00:38:11.670 "ddgst": false, 00:38:11.670 "multipath": "multipath" 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "bdev_nvme_set_hotplug", 00:38:11.670 "params": { 00:38:11.670 "period_us": 100000, 00:38:11.670 "enable": false 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "bdev_wait_for_examine" 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "subsystem": "nbd", 00:38:11.670 "config": [] 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }' 00:38:11.670 13:34:53 keyring_file -- keyring/file.sh@115 -- # killprocess 2061072 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2061072 ']' 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2061072 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2061072 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2061072' 00:38:11.670 killing process with pid 2061072 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@971 -- # kill 2061072 00:38:11.670 Received shutdown signal, test time was about 1.000000 seconds 00:38:11.670 00:38:11.670 Latency(us) 00:38:11.670 [2024-11-06T12:34:53.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.670 [2024-11-06T12:34:53.572Z] =================================================================================================================== 00:38:11.670 [2024-11-06T12:34:53.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@976 -- # wait 2061072 00:38:11.670 13:34:53 keyring_file -- keyring/file.sh@118 -- # bperfpid=2062885 00:38:11.670 13:34:53 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2062885 /var/tmp/bperf.sock 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2062885 ']' 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:11.670 13:34:53 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:11.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:11.670 13:34:53 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:11.670 13:34:53 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:11.670 "subsystems": [ 00:38:11.670 { 00:38:11.670 "subsystem": "keyring", 00:38:11.670 "config": [ 00:38:11.670 { 00:38:11.670 "method": "keyring_file_add_key", 00:38:11.670 "params": { 00:38:11.670 "name": "key0", 00:38:11.670 "path": "/tmp/tmp.BppoVpXlHT" 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "keyring_file_add_key", 00:38:11.670 "params": { 00:38:11.670 "name": "key1", 00:38:11.670 "path": "/tmp/tmp.9wMYTNB0T5" 00:38:11.670 } 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "subsystem": "iobuf", 00:38:11.670 "config": [ 00:38:11.670 { 00:38:11.670 "method": "iobuf_set_options", 00:38:11.670 "params": { 00:38:11.670 "small_pool_count": 8192, 00:38:11.670 "large_pool_count": 1024, 00:38:11.670 "small_bufsize": 8192, 00:38:11.670 "large_bufsize": 135168, 00:38:11.670 "enable_numa": false 00:38:11.670 } 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "subsystem": "sock", 00:38:11.670 "config": [ 00:38:11.670 { 00:38:11.670 "method": "sock_set_default_impl", 00:38:11.670 "params": { 00:38:11.670 "impl_name": "posix" 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "sock_impl_set_options", 00:38:11.670 "params": { 00:38:11.670 "impl_name": "ssl", 00:38:11.670 "recv_buf_size": 4096, 00:38:11.670 "send_buf_size": 4096, 00:38:11.670 "enable_recv_pipe": true, 00:38:11.670 "enable_quickack": false, 00:38:11.670 "enable_placement_id": 0, 00:38:11.670 "enable_zerocopy_send_server": true, 00:38:11.670 "enable_zerocopy_send_client": false, 00:38:11.670 "zerocopy_threshold": 0, 00:38:11.670 "tls_version": 0, 00:38:11.670 "enable_ktls": false 00:38:11.670 } 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "method": "sock_impl_set_options", 00:38:11.670 "params": { 00:38:11.670 "impl_name": "posix", 00:38:11.670 "recv_buf_size": 2097152, 00:38:11.670 "send_buf_size": 2097152, 00:38:11.670 "enable_recv_pipe": true, 00:38:11.670 "enable_quickack": false, 00:38:11.670 "enable_placement_id": 0, 00:38:11.670 "enable_zerocopy_send_server": true, 00:38:11.670 "enable_zerocopy_send_client": false, 00:38:11.670 "zerocopy_threshold": 0, 00:38:11.670 "tls_version": 0, 00:38:11.670 "enable_ktls": false 00:38:11.670 } 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "subsystem": "vmd", 00:38:11.670 
"config": [] 00:38:11.670 }, 00:38:11.670 { 00:38:11.670 "subsystem": "accel", 00:38:11.670 "config": [ 00:38:11.670 { 00:38:11.670 "method": "accel_set_options", 00:38:11.670 "params": { 00:38:11.670 "small_cache_size": 128, 00:38:11.670 "large_cache_size": 16, 00:38:11.670 "task_count": 2048, 00:38:11.670 "sequence_count": 2048, 00:38:11.670 "buf_count": 2048 00:38:11.670 } 00:38:11.670 } 00:38:11.670 ] 00:38:11.670 }, 00:38:11.670 { 00:38:11.671 "subsystem": "bdev", 00:38:11.671 "config": [ 00:38:11.671 { 00:38:11.671 "method": "bdev_set_options", 00:38:11.671 "params": { 00:38:11.671 "bdev_io_pool_size": 65535, 00:38:11.671 "bdev_io_cache_size": 256, 00:38:11.671 "bdev_auto_examine": true, 00:38:11.671 "iobuf_small_cache_size": 128, 00:38:11.671 "iobuf_large_cache_size": 16 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_raid_set_options", 00:38:11.671 "params": { 00:38:11.671 "process_window_size_kb": 1024, 00:38:11.671 "process_max_bandwidth_mb_sec": 0 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_iscsi_set_options", 00:38:11.671 "params": { 00:38:11.671 "timeout_sec": 30 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_nvme_set_options", 00:38:11.671 "params": { 00:38:11.671 "action_on_timeout": "none", 00:38:11.671 "timeout_us": 0, 00:38:11.671 "timeout_admin_us": 0, 00:38:11.671 "keep_alive_timeout_ms": 10000, 00:38:11.671 "arbitration_burst": 0, 00:38:11.671 "low_priority_weight": 0, 00:38:11.671 "medium_priority_weight": 0, 00:38:11.671 "high_priority_weight": 0, 00:38:11.671 "nvme_adminq_poll_period_us": 10000, 00:38:11.671 "nvme_ioq_poll_period_us": 0, 00:38:11.671 "io_queue_requests": 512, 00:38:11.671 "delay_cmd_submit": true, 00:38:11.671 "transport_retry_count": 4, 00:38:11.671 "bdev_retry_count": 3, 00:38:11.671 "transport_ack_timeout": 0, 00:38:11.671 "ctrlr_loss_timeout_sec": 0, 00:38:11.671 "reconnect_delay_sec": 0, 00:38:11.671 "fast_io_fail_timeout_sec": 0, 00:38:11.671 "disable_auto_failback": false, 00:38:11.671 "generate_uuids": false, 00:38:11.671 "transport_tos": 0, 00:38:11.671 "nvme_error_stat": false, 00:38:11.671 "rdma_srq_size": 0, 00:38:11.671 13:34:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:11.671 "io_path_stat": false, 00:38:11.671 "allow_accel_sequence": false, 00:38:11.671 "rdma_max_cq_size": 0, 00:38:11.671 "rdma_cm_event_timeout_ms": 0, 00:38:11.671 "dhchap_digests": [ 00:38:11.671 "sha256", 00:38:11.671 "sha384", 00:38:11.671 "sha512" 00:38:11.671 ], 00:38:11.671 "dhchap_dhgroups": [ 00:38:11.671 "null", 00:38:11.671 "ffdhe2048", 00:38:11.671 "ffdhe3072", 00:38:11.671 "ffdhe4096", 00:38:11.671 "ffdhe6144", 00:38:11.671 "ffdhe8192" 00:38:11.671 ] 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_nvme_attach_controller", 00:38:11.671 "params": { 00:38:11.671 "name": "nvme0", 00:38:11.671 "trtype": "TCP", 00:38:11.671 "adrfam": "IPv4", 00:38:11.671 "traddr": "127.0.0.1", 00:38:11.671 "trsvcid": "4420", 00:38:11.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:11.671 "prchk_reftag": false, 00:38:11.671 "prchk_guard": false, 00:38:11.671 "ctrlr_loss_timeout_sec": 0, 00:38:11.671 "reconnect_delay_sec": 0, 00:38:11.671 "fast_io_fail_timeout_sec": 0, 00:38:11.671 "psk": "key0", 00:38:11.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:11.671 "hdgst": false, 00:38:11.671 "ddgst": false, 00:38:11.671 "multipath": "multipath" 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_nvme_set_hotplug", 00:38:11.671 
"params": { 00:38:11.671 "period_us": 100000, 00:38:11.671 "enable": false 00:38:11.671 } 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "method": "bdev_wait_for_examine" 00:38:11.671 } 00:38:11.671 ] 00:38:11.671 }, 00:38:11.671 { 00:38:11.671 "subsystem": "nbd", 00:38:11.671 "config": [] 00:38:11.671 } 00:38:11.671 ] 00:38:11.671 }' 00:38:11.671 [2024-11-06 13:34:53.525501] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 00:38:11.671 [2024-11-06 13:34:53.525555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062885 ] 00:38:11.931 [2024-11-06 13:34:53.610204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.931 [2024-11-06 13:34:53.638729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.931 [2024-11-06 13:34:53.781762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:12.503 13:34:54 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:12.503 13:34:54 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:12.503 13:34:54 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:12.503 13:34:54 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:12.503 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:12.764 13:34:54 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:12.764 13:34:54 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:12.764 13:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:12.764 13:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:12.764 13:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:12.764 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:12.764 13:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:13.025 13:34:54 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:13.025 13:34:54 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:13.025 13:34:54 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:13.025 13:34:54 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:13.025 13:34:54 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:13.025 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:13.286 13:34:55 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:13.286 13:34:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:13.286 13:34:55 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BppoVpXlHT /tmp/tmp.9wMYTNB0T5 00:38:13.286 13:34:55 keyring_file -- keyring/file.sh@20 -- # killprocess 2062885 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2062885 ']' 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2062885 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2062885 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2062885' 00:38:13.286 killing process with pid 2062885 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@971 -- # kill 2062885 00:38:13.286 Received shutdown signal, test time was about 1.000000 seconds 00:38:13.286 00:38:13.286 Latency(us) 00:38:13.286 [2024-11-06T12:34:55.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.286 [2024-11-06T12:34:55.188Z] =================================================================================================================== 00:38:13.286 [2024-11-06T12:34:55.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:13.286 13:34:55 keyring_file -- common/autotest_common.sh@976 -- # wait 2062885 00:38:13.546 13:34:55 keyring_file -- keyring/file.sh@21 -- # killprocess 2061042 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2061042 ']' 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2061042 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2061042 00:38:13.546 13:34:55 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:13.547 13:34:55 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:13.547 13:34:55 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2061042' 00:38:13.547 killing process with pid 2061042 00:38:13.547 13:34:55 keyring_file -- common/autotest_common.sh@971 -- # kill 2061042 00:38:13.547 13:34:55 keyring_file -- common/autotest_common.sh@976 -- # wait 2061042 00:38:13.807 00:38:13.807 real 0m12.014s 00:38:13.807 user 0m29.098s 00:38:13.807 sys 0m2.650s 00:38:13.807 13:34:55 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:13.807 13:34:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:13.807 ************************************ 00:38:13.807 END TEST keyring_file 00:38:13.807 ************************************ 00:38:13.807 13:34:55 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:13.807 13:34:55 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:13.807 13:34:55 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:13.807 13:34:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 
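The second bdevperf launch above received its configuration as "-c /dev/fd/63": the JSON captured by save_config is echoed back through bash process substitution, so the new instance picks up the keyring and bdev_nvme sections without a temp file on disk. A sketch of the pattern under that assumption (flags copied from the trace; $config stands for the save_config dump shown above):

config=$(rpc.py -s /var/tmp/bperf.sock save_config)  # JSON like the dump above
# <(echo "$config") expands to a /dev/fd/NN path -- /dev/fd/63 in this run --
# which bdevperf reads as if it were an ordinary config file.
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")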
00:38:13.808 13:34:55 -- common/autotest_common.sh@10 -- # set +x 00:38:13.808 ************************************ 00:38:13.808 START TEST keyring_linux 00:38:13.808 ************************************ 00:38:13.808 13:34:55 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:13.808 Joined session keyring: 441694587 00:38:13.808 * Looking for test storage... 00:38:13.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:13.808 13:34:55 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:13.808 13:34:55 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:13.808 13:34:55 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.070 --rc genhtml_branch_coverage=1 00:38:14.070 --rc genhtml_function_coverage=1 00:38:14.070 --rc genhtml_legend=1 00:38:14.070 --rc geninfo_all_blocks=1 00:38:14.070 --rc geninfo_unexecuted_blocks=1 00:38:14.070 00:38:14.070 ' 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.070 --rc genhtml_branch_coverage=1 00:38:14.070 --rc genhtml_function_coverage=1 00:38:14.070 --rc genhtml_legend=1 00:38:14.070 --rc geninfo_all_blocks=1 00:38:14.070 --rc geninfo_unexecuted_blocks=1 00:38:14.070 00:38:14.070 ' 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.070 --rc genhtml_branch_coverage=1 00:38:14.070 --rc genhtml_function_coverage=1 00:38:14.070 --rc genhtml_legend=1 00:38:14.070 --rc geninfo_all_blocks=1 00:38:14.070 --rc geninfo_unexecuted_blocks=1 00:38:14.070 00:38:14.070 ' 00:38:14.070 13:34:55 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:14.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.070 --rc genhtml_branch_coverage=1 00:38:14.070 --rc genhtml_function_coverage=1 00:38:14.070 --rc genhtml_legend=1 00:38:14.070 --rc geninfo_all_blocks=1 00:38:14.070 --rc geninfo_unexecuted_blocks=1 00:38:14.070 00:38:14.070 ' 00:38:14.070 13:34:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:14.070 13:34:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:14.070 13:34:55 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.070 13:34:55 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.070 13:34:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.070 13:34:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.070 13:34:55 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.070 13:34:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:14.071 13:34:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
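Before any keys are touched, linux.sh is entered through scripts/keyctl-session-wrapper, which is what printed "Joined session keyring: 441694587" above: the test runs inside a throwaway session keyring, so everything it links into @s vanishes when the session ends. A rough keyutils equivalent, with placeholder commands standing in for the wrapper's actual contents:

# "keyctl session -" joins a fresh anonymous session keyring and execs the
# given program inside it; @s then names that keyring, isolating the test
# from the invoking shell's keys.
keyctl session - /bin/bash -c '
  keyctl add user :spdk-test:key0 "key material here" @s
  keyctl search @s user :spdk-test:key0
'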
00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:14.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:14.071 /tmp/:spdk-test:key0 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:14.071 
13:34:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:14.071 13:34:55 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:14.071 13:34:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:14.071 /tmp/:spdk-test:key1 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2063340 00:38:14.071 13:34:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2063340 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2063340 ']' 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.071 13:34:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:14.071 [2024-11-06 13:34:55.908053] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:38:14.071 [2024-11-06 13:34:55.908128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063340 ] 00:38:14.332 [2024-11-06 13:34:55.996743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.332 [2024-11-06 13:34:56.032215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:14.904 [2024-11-06 13:34:56.708748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.904 null0 00:38:14.904 [2024-11-06 13:34:56.740794] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:14.904 [2024-11-06 13:34:56.741143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:14.904 321011265 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:14.904 855304285 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2063657 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2063657 /var/tmp/bperf.sock 00:38:14.904 13:34:56 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2063657 ']' 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:14.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.904 13:34:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:15.165 [2024-11-06 13:34:56.827276] Starting SPDK v25.01-pre git sha1 adaafacab / DPDK 24.03.0 initialization... 
00:38:15.165 [2024-11-06 13:34:56.827336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063657 ] 00:38:15.165 [2024-11-06 13:34:56.912365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.165 [2024-11-06 13:34:56.942394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:15.735 13:34:57 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:15.735 13:34:57 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:15.735 13:34:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:15.735 13:34:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:15.996 13:34:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:15.996 13:34:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:16.257 13:34:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:16.257 13:34:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:16.257 [2024-11-06 13:34:58.122623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:16.518 nvme0n1 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:16.518 13:34:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:16.518 13:34:58 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:16.518 13:34:58 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:16.518 13:34:58 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:16.518 13:34:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@25 -- # sn=321011265 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:16.779 13:34:58 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 321011265 == \3\2\1\0\1\1\2\6\5 ]] 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 321011265 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:16.779 13:34:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:16.779 Running I/O for 1 seconds... 00:38:18.164 24407.00 IOPS, 95.34 MiB/s 00:38:18.164 Latency(us) 00:38:18.164 [2024-11-06T12:35:00.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.164 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:18.164 nvme0n1 : 1.01 24407.47 95.34 0.00 0.00 5229.44 4014.08 8465.07 00:38:18.164 [2024-11-06T12:35:00.066Z] =================================================================================================================== 00:38:18.164 [2024-11-06T12:35:00.066Z] Total : 24407.47 95.34 0.00 0.00 5229.44 4014.08 8465.07 00:38:18.164 { 00:38:18.164 "results": [ 00:38:18.164 { 00:38:18.164 "job": "nvme0n1", 00:38:18.164 "core_mask": "0x2", 00:38:18.164 "workload": "randread", 00:38:18.164 "status": "finished", 00:38:18.164 "queue_depth": 128, 00:38:18.164 "io_size": 4096, 00:38:18.164 "runtime": 1.005225, 00:38:18.164 "iops": 24407.470964211992, 00:38:18.164 "mibps": 95.3416834539531, 00:38:18.164 "io_failed": 0, 00:38:18.164 "io_timeout": 0, 00:38:18.164 "avg_latency_us": 5229.441469465389, 00:38:18.164 "min_latency_us": 4014.08, 00:38:18.164 "max_latency_us": 8465.066666666668 00:38:18.164 } 00:38:18.164 ], 00:38:18.164 "core_count": 1 00:38:18.164 } 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:18.164 13:34:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:18.164 13:34:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:18.164 13:34:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.425 13:35:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:18.425 [2024-11-06 13:35:00.250065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:18.425 [2024-11-06 13:35:00.250073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154a60 (107): Transport endpoint is not connected 00:38:18.425 [2024-11-06 13:35:00.251069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154a60 (9): Bad file descriptor 00:38:18.425 [2024-11-06 13:35:00.252070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:18.425 [2024-11-06 13:35:00.252078] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:18.425 [2024-11-06 13:35:00.252085] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:18.425 [2024-11-06 13:35:00.252091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:18.425 request: 00:38:18.425 { 00:38:18.425 "name": "nvme0", 00:38:18.425 "trtype": "tcp", 00:38:18.425 "traddr": "127.0.0.1", 00:38:18.425 "adrfam": "ipv4", 00:38:18.425 "trsvcid": "4420", 00:38:18.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:18.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:18.425 "prchk_reftag": false, 00:38:18.425 "prchk_guard": false, 00:38:18.425 "hdgst": false, 00:38:18.425 "ddgst": false, 00:38:18.425 "psk": ":spdk-test:key1", 00:38:18.425 "allow_unrecognized_csi": false, 00:38:18.425 "method": "bdev_nvme_attach_controller", 00:38:18.425 "req_id": 1 00:38:18.425 } 00:38:18.425 Got JSON-RPC error response 00:38:18.425 response: 00:38:18.425 { 00:38:18.425 "code": -5, 00:38:18.425 "message": "Input/output error" 00:38:18.425 } 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:18.425 13:35:00 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@33 -- # sn=321011265 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 321011265 00:38:18.425 1 links removed 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@33 -- # sn=855304285 00:38:18.425 13:35:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 855304285 00:38:18.425 1 links removed 00:38:18.426 13:35:00 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2063657 00:38:18.426 13:35:00 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2063657 ']' 00:38:18.426 13:35:00 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2063657 00:38:18.426 13:35:00 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:18.426 13:35:00 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:18.426 13:35:00 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2063657 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2063657' 00:38:18.686 killing process with pid 2063657 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@971 -- # kill 2063657 00:38:18.686 Received shutdown signal, test time was about 1.000000 seconds 00:38:18.686 00:38:18.686 
Latency(us) 00:38:18.686 [2024-11-06T12:35:00.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.686 [2024-11-06T12:35:00.588Z] =================================================================================================================== 00:38:18.686 [2024-11-06T12:35:00.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@976 -- # wait 2063657 00:38:18.686 13:35:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2063340 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2063340 ']' 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2063340 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2063340 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2063340' 00:38:18.686 killing process with pid 2063340 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@971 -- # kill 2063340 00:38:18.686 13:35:00 keyring_linux -- common/autotest_common.sh@976 -- # wait 2063340 00:38:18.947 00:38:18.947 real 0m5.181s 00:38:18.947 user 0m9.620s 00:38:18.947 sys 0m1.462s 00:38:18.947 13:35:00 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:18.947 13:35:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:18.947 ************************************ 00:38:18.947 END TEST keyring_linux 00:38:18.947 ************************************ 00:38:18.947 13:35:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:18.947 13:35:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:38:18.947 13:35:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:18.947 13:35:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:18.947 13:35:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:38:18.947 13:35:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:38:18.947 13:35:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:38:18.947 13:35:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:18.947 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:38:18.947 13:35:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:38:18.947 13:35:00 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:38:18.947 13:35:00 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:38:18.947 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:38:27.198 INFO: APP EXITING 
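The cleanup trace above is the whole lifecycle of a kernel-keyring PSK in three keyctl calls: resolve the ':spdk-test:key0' description to its serial number with keyctl search, inspect the payload with keyctl print, then unlink it, which is why the log reports "1 links removed" per key. A minimal sketch, assuming a key was added to @s as in the test:

sn=$(keyctl search @s user :spdk-test:key0)  # description -> serial, e.g. 321011265
keyctl print "$sn"                           # payload: the NVMeTLSkey-1:... string
keyctl unlink "$sn"                          # drops the session link ("1 links removed")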
00:38:27.198 INFO: killing all VMs 00:38:27.198 INFO: killing vhost app 00:38:27.198 INFO: EXIT DONE 00:38:30.503 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:30.503 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:30.503 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:34.710 Cleaning 00:38:34.710 Removing: /var/run/dpdk/spdk0/config 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:34.710 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:34.710 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:34.710 Removing: /var/run/dpdk/spdk1/config 00:38:34.710 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:34.710 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:34.710 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:34.711 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:34.711 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:34.711 Removing: /var/run/dpdk/spdk2/config 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:34.711 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:34.711 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:34.711 Removing: /var/run/dpdk/spdk3/config 00:38:34.711 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:34.711 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:34.711 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:34.711 Removing: /var/run/dpdk/spdk4/config 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:34.711 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:34.711 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:34.711 Removing: /dev/shm/bdev_svc_trace.1 00:38:34.711 Removing: /dev/shm/nvmf_trace.0 00:38:34.711 Removing: /dev/shm/spdk_tgt_trace.pid1482194 00:38:34.711 Removing: /var/run/dpdk/spdk0 00:38:34.711 Removing: /var/run/dpdk/spdk1 00:38:34.711 Removing: /var/run/dpdk/spdk2 00:38:34.711 Removing: /var/run/dpdk/spdk3 00:38:34.711 Removing: /var/run/dpdk/spdk4 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1480703 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1482194 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1483041 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1484080 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1484429 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1485500 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1485730 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1485963 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1487103 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1487892 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1488281 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1488621 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1488985 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1489270 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1489534 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1489886 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1490276 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1491343 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1494857 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1495170 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1495502 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1495681 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1496066 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1496218 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1496755 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1496774 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1497139 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1497348 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1497509 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1497795 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1498289 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1498588 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1498887 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1503599 00:38:34.711 Removing: 
/var/run/dpdk/spdk_pid1509013 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1521122 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1521806 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1527672 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1528148 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1533251 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1540388 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1543705 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1556231 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1567205 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1569377 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1570554 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1592221 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1597034 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1654430 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1660860 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1668063 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1675997 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1675999 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1677001 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1678006 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1679011 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1679681 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1679692 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1680021 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1680085 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1680209 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1681284 00:38:34.711 Removing: /var/run/dpdk/spdk_pid1682299 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1683432 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1684046 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1684155 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1684395 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1686240 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1687471 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1697490 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1731832 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1737293 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1739159 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1741445 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1741791 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1742061 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1742229 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1743080 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1745217 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1746607 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1747000 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1749719 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1750423 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1751140 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1756233 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1762972 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1762973 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1762974 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1767691 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1778705 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1783625 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1790968 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1792466 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1794082 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1795830 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1801407 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1806753 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1811831 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1821105 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1821212 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1826447 00:38:34.972 Removing: 
/var/run/dpdk/spdk_pid1826778 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1826979 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1827516 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1827523 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1833482 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1834218 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1839636 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1842797 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1849543 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1856112 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1866377 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1875097 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1875099 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1898938 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1899630 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1900316 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1901002 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1902069 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1902759 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1903624 00:38:34.972 Removing: /var/run/dpdk/spdk_pid1904436 00:38:34.973 Removing: /var/run/dpdk/spdk_pid1909526 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1909860 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1917198 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1917336 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1923797 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1928958 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1941170 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1941865 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1947148 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1947502 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1952574 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1959324 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1962402 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1974624 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1985961 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1987966 00:38:35.234 Removing: /var/run/dpdk/spdk_pid1988982 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2008689 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2013443 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2016644 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2024415 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2024420 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2030489 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2032863 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2035243 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2037136 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2039444 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2040854 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2050983 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2051520 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2052182 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2055141 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2055806 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2056231 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2061042 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2061072 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2062885 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2063340 00:38:35.234 Removing: /var/run/dpdk/spdk_pid2063657 00:38:35.234 Clean 00:38:35.234 13:35:17 -- common/autotest_common.sh@1451 -- # return 0 00:38:35.234 13:35:17 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:35.234 13:35:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:35.234 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:35.495 13:35:17 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:35.495 
13:35:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:35.495 13:35:17 -- common/autotest_common.sh@10 -- # set +x 00:38:35.495 13:35:17 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:35.495 13:35:17 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:35.495 13:35:17 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:35.495 13:35:17 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:35.495 13:35:17 -- spdk/autotest.sh@394 -- # hostname 00:38:35.495 13:35:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:35.755 geninfo: WARNING: invalid characters removed from testname! 00:39:02.331 13:35:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:03.714 13:35:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:05.623 13:35:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:07.005 13:35:48 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:08.912 13:35:50 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:10.297 13:35:52 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:12.206 13:35:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:12.206 13:35:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:12.206 13:35:53 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:12.206 13:35:53 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:12.206 13:35:53 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:12.206 13:35:53 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:12.206 + [[ -n 1395235 ]] 00:39:12.206 + sudo kill 1395235 00:39:12.216 [Pipeline] } 00:39:12.233 [Pipeline] // stage 00:39:12.238 [Pipeline] } 00:39:12.253 [Pipeline] // timeout 00:39:12.258 [Pipeline] } 00:39:12.272 [Pipeline] // catchError 00:39:12.277 [Pipeline] } 00:39:12.292 [Pipeline] // wrap 00:39:12.299 [Pipeline] } 00:39:12.312 [Pipeline] // catchError 00:39:12.321 [Pipeline] stage 00:39:12.324 [Pipeline] { (Epilogue) 00:39:12.338 [Pipeline] catchError 00:39:12.340 [Pipeline] { 00:39:12.353 [Pipeline] echo 00:39:12.355 Cleanup processes 00:39:12.361 [Pipeline] sh 00:39:12.651 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:12.651 2076722 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:12.666 [Pipeline] sh 00:39:12.959 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:12.959 ++ grep -v 'sudo pgrep' 00:39:12.959 ++ awk '{print $1}' 00:39:12.959 + sudo kill -9 00:39:12.959 + true 00:39:12.977 [Pipeline] sh 00:39:13.268 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:25.510 [Pipeline] sh 00:39:25.802 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:25.802 Artifacts sizes are good 00:39:25.816 [Pipeline] archiveArtifacts 00:39:25.823 Archiving artifacts 00:39:26.111 [Pipeline] sh 00:39:26.447 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:26.464 [Pipeline] cleanWs 00:39:26.475 [WS-CLEANUP] Deleting project workspace... 00:39:26.475 [WS-CLEANUP] Deferred wipeout is used... 00:39:26.483 [WS-CLEANUP] done 00:39:26.485 [Pipeline] } 00:39:26.504 [Pipeline] // catchError 00:39:26.519 [Pipeline] sh 00:39:26.811 + logger -p user.info -t JENKINS-CI 00:39:26.821 [Pipeline] } 00:39:26.835 [Pipeline] // stage 00:39:26.840 [Pipeline] } 00:39:26.855 [Pipeline] // node 00:39:26.861 [Pipeline] End of Pipeline 00:39:26.898 Finished: SUCCESS